| Column | Type | Range / classes |
|:---|:---|:---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 - 900k |
| metadata | stringlengths | 2 - 438k |
| id | stringlengths | 5 - 122 |
| last_modified | null | - |
| tags | sequencelengths | 1 - 1.84k |
| sha | null | - |
| created_at | stringlengths | 25 - 25 |
| arxiv | sequencelengths | 0 - 201 |
| languages | sequencelengths | 0 - 1.83k |
| tags_str | stringlengths | 17 - 9.34k |
| text_str | stringlengths | 0 - 389k |
| text_lists | sequencelengths | 0 - 722 |
| processed_texts | sequencelengths | 1 - 723 |
| tokens_length | sequencelengths | 1 - 723 |
| input_texts | sequencelengths | 1 - 1 |
text-generation | transformers |
# andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF
This model was converted to GGUF format from [`lemon-mint/gemma-2b-translation-v0.150`](https://huggingface.co/lemon-mint/gemma-2b-translation-v0.150) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lemon-mint/gemma-2b-translation-v0.150) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF --model gemma-2b-translation-v0.150.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF --model gemma-2b-translation-v0.150.Q4_K_M.gguf -c 2048
```
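Once the server is running, it can be queried over HTTP. The snippet below is a minimal sketch that assumes the default `127.0.0.1:8080` address and the `/completion` endpoint of typical llama.cpp server builds; check `llama-server --help` for the options of your build.
```python
# Minimal sketch: query a locally running llama-server over HTTP.
# Host, port and payload fields are assumptions for a typical llama.cpp build.
import requests

response = requests.post(
    "http://127.0.0.1:8080/completion",
    json={"prompt": "Translate into Korean: Hamsters don't eat cats.", "n_predict": 64},
    timeout=120,
)
print(response.json()["content"])  # generated continuation
```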
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-2b-translation-v0.150.Q4_K_M.gguf -n 128
```
| {"language": ["ko"], "license": "gemma", "library_name": "transformers", "tags": ["gemma", "pytorch", "instruct", "finetune", "translation", "llama-cpp", "gguf-my-repo"], "base_model": "lemon-mint/gemma-ko-1.1-2b-it", "widget": [{"messages": [{"role": "user", "content": "Translate into Korean:Hamsters don't eat cats."}]}], "pipeline_tag": "text-generation"} | andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"gemma",
"pytorch",
"instruct",
"finetune",
"translation",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"base_model:lemon-mint/gemma-ko-1.1-2b-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:14:11+00:00 | [] | [
"ko"
] | TAGS
#transformers #gguf #gemma #pytorch #instruct #finetune #translation #llama-cpp #gguf-my-repo #text-generation #ko #base_model-lemon-mint/gemma-ko-1.1-2b-it #license-gemma #endpoints_compatible #region-us
|
# andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF
This model was converted to GGUF format from 'lemon-mint/gemma-2b-translation-v0.150' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'lemon-mint/gemma-2b-translation-v0.150' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #gemma #pytorch #instruct #finetune #translation #llama-cpp #gguf-my-repo #text-generation #ko #base_model-lemon-mint/gemma-ko-1.1-2b-it #license-gemma #endpoints_compatible #region-us \n",
"# andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'lemon-mint/gemma-2b-translation-v0.150' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
78,
87,
52
] | [
"TAGS\n#transformers #gguf #gemma #pytorch #instruct #finetune #translation #llama-cpp #gguf-my-repo #text-generation #ko #base_model-lemon-mint/gemma-ko-1.1-2b-it #license-gemma #endpoints_compatible #region-us \n# andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'lemon-mint/gemma-2b-translation-v0.150' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4826
- F1 Score: 0.7893
- Accuracy: 0.7888
## Model description
More information needed
## Intended uses & limitations
More information needed
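Until the card is filled in, the snippet below is a hedged sketch of loading this adapter for inference with `peft` and `transformers`. The use of `AutoModelForSequenceClassification`, `trust_remote_code=True`, the binary label count and the example DNA sequence are assumptions for illustration, not details taken from this card.

```python
# Hedged sketch: attach the PEFT adapter to the seqsight base model for inference.
# Model class, num_labels and trust_remote_code are assumptions about the base checkpoint.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id).eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # placeholder DNA sequence
logits = model(**inputs).logits
```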
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5716 | 1.15 | 200 | 0.5526 | 0.7232 | 0.7229 |
| 0.5198 | 2.3 | 400 | 0.5708 | 0.6990 | 0.7039 |
| 0.4906 | 3.45 | 600 | 0.5257 | 0.7421 | 0.7424 |
| 0.4834 | 4.6 | 800 | 0.5103 | 0.7440 | 0.7442 |
| 0.4792 | 5.75 | 1000 | 0.5061 | 0.7574 | 0.7571 |
| 0.4697 | 6.9 | 1200 | 0.5028 | 0.7583 | 0.7578 |
| 0.4663 | 8.05 | 1400 | 0.5187 | 0.7451 | 0.7452 |
| 0.4617 | 9.2 | 1600 | 0.5189 | 0.7366 | 0.7384 |
| 0.4539 | 10.34 | 1800 | 0.5051 | 0.7600 | 0.7596 |
| 0.4513 | 11.49 | 2000 | 0.5022 | 0.7568 | 0.7567 |
| 0.4441 | 12.64 | 2200 | 0.5134 | 0.7474 | 0.7485 |
| 0.4441 | 13.79 | 2400 | 0.5256 | 0.7420 | 0.7442 |
| 0.4386 | 14.94 | 2600 | 0.4957 | 0.7596 | 0.7596 |
| 0.4343 | 16.09 | 2800 | 0.5198 | 0.7446 | 0.7463 |
| 0.4309 | 17.24 | 3000 | 0.5055 | 0.7608 | 0.7607 |
| 0.4261 | 18.39 | 3200 | 0.5004 | 0.7610 | 0.7607 |
| 0.427 | 19.54 | 3400 | 0.4949 | 0.7589 | 0.7589 |
| 0.4197 | 20.69 | 3600 | 0.4976 | 0.7673 | 0.7668 |
| 0.4211 | 21.84 | 3800 | 0.5279 | 0.7488 | 0.7503 |
| 0.4137 | 22.99 | 4000 | 0.5355 | 0.7462 | 0.7478 |
| 0.4159 | 24.14 | 4200 | 0.4833 | 0.7741 | 0.7737 |
| 0.4065 | 25.29 | 4400 | 0.5006 | 0.7661 | 0.7657 |
| 0.4073 | 26.44 | 4600 | 0.5198 | 0.7591 | 0.7593 |
| 0.4071 | 27.59 | 4800 | 0.5177 | 0.7584 | 0.7589 |
| 0.3981 | 28.74 | 5000 | 0.5070 | 0.7573 | 0.7575 |
| 0.4038 | 29.89 | 5200 | 0.5085 | 0.7685 | 0.7683 |
| 0.3935 | 31.03 | 5400 | 0.5313 | 0.7532 | 0.7542 |
| 0.3959 | 32.18 | 5600 | 0.5124 | 0.7676 | 0.7675 |
| 0.387 | 33.33 | 5800 | 0.5151 | 0.7710 | 0.7708 |
| 0.3946 | 34.48 | 6000 | 0.5046 | 0.7737 | 0.7733 |
| 0.3824 | 35.63 | 6200 | 0.5079 | 0.7748 | 0.7744 |
| 0.3887 | 36.78 | 6400 | 0.5168 | 0.7655 | 0.7654 |
| 0.3817 | 37.93 | 6600 | 0.5358 | 0.7587 | 0.7593 |
| 0.3819 | 39.08 | 6800 | 0.5097 | 0.7685 | 0.7683 |
| 0.3795 | 40.23 | 7000 | 0.5268 | 0.7590 | 0.7593 |
| 0.377 | 41.38 | 7200 | 0.5260 | 0.7626 | 0.7625 |
| 0.3792 | 42.53 | 7400 | 0.5261 | 0.7598 | 0.7600 |
| 0.376 | 43.68 | 7600 | 0.5163 | 0.7693 | 0.7690 |
| 0.3694 | 44.83 | 7800 | 0.5214 | 0.7647 | 0.7647 |
| 0.3722 | 45.98 | 8000 | 0.5140 | 0.7697 | 0.7693 |
| 0.3719 | 47.13 | 8200 | 0.5319 | 0.7581 | 0.7582 |
| 0.3696 | 48.28 | 8400 | 0.5281 | 0.7608 | 0.7607 |
| 0.3648 | 49.43 | 8600 | 0.5329 | 0.7561 | 0.7560 |
| 0.3661 | 50.57 | 8800 | 0.5336 | 0.7633 | 0.7632 |
| 0.3686 | 51.72 | 9000 | 0.5273 | 0.7692 | 0.7690 |
| 0.3636 | 52.87 | 9200 | 0.5321 | 0.7598 | 0.7596 |
| 0.3651 | 54.02 | 9400 | 0.5381 | 0.7581 | 0.7582 |
| 0.366 | 55.17 | 9600 | 0.5369 | 0.7596 | 0.7596 |
| 0.3648 | 56.32 | 9800 | 0.5287 | 0.7678 | 0.7675 |
| 0.3621 | 57.47 | 10000 | 0.5303 | 0.7641 | 0.7639 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:15:25+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_16384\_512\_56M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4826
* F1 Score: 0.7893
* Accuracy: 0.7888
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5010
- F1 Score: 0.7846
- Accuracy: 0.7841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
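For readers who want to approximate this setup in code, the block below is a hedged sketch of the corresponding `transformers.TrainingArguments`. The output directory and the 200-step evaluation interval (inferred from the results table) are assumptions; this is not the exact training script used for this run.

```python
# Hedged sketch: TrainingArguments roughly matching the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_f",  # illustrative
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,               # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",    # assumption: evaluate every 200 steps, as in the table
    eval_steps=200,
)
```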
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5565 | 1.15 | 200 | 0.5465 | 0.7333 | 0.7334 |
| 0.5011 | 2.3 | 400 | 0.5553 | 0.6999 | 0.7060 |
| 0.4765 | 3.45 | 600 | 0.5192 | 0.7452 | 0.7460 |
| 0.4689 | 4.6 | 800 | 0.5017 | 0.7538 | 0.7542 |
| 0.4619 | 5.75 | 1000 | 0.5046 | 0.7607 | 0.7611 |
| 0.4479 | 6.9 | 1200 | 0.4935 | 0.7728 | 0.7726 |
| 0.4407 | 8.05 | 1400 | 0.4994 | 0.7679 | 0.7675 |
| 0.4289 | 9.2 | 1600 | 0.5391 | 0.7429 | 0.7449 |
| 0.4197 | 10.34 | 1800 | 0.5165 | 0.7561 | 0.7567 |
| 0.413 | 11.49 | 2000 | 0.4956 | 0.7697 | 0.7693 |
| 0.4003 | 12.64 | 2200 | 0.4967 | 0.7658 | 0.7661 |
| 0.3972 | 13.79 | 2400 | 0.5274 | 0.7491 | 0.7510 |
| 0.3863 | 14.94 | 2600 | 0.4881 | 0.7713 | 0.7708 |
| 0.3783 | 16.09 | 2800 | 0.5760 | 0.7378 | 0.7413 |
| 0.3673 | 17.24 | 3000 | 0.5253 | 0.7624 | 0.7629 |
| 0.3608 | 18.39 | 3200 | 0.5385 | 0.7592 | 0.7593 |
| 0.3588 | 19.54 | 3400 | 0.5170 | 0.7635 | 0.7632 |
| 0.3431 | 20.69 | 3600 | 0.5149 | 0.7730 | 0.7726 |
| 0.3393 | 21.84 | 3800 | 0.5352 | 0.7704 | 0.7701 |
| 0.3278 | 22.99 | 4000 | 0.5680 | 0.7584 | 0.7589 |
| 0.3275 | 24.14 | 4200 | 0.5353 | 0.7673 | 0.7668 |
| 0.3126 | 25.29 | 4400 | 0.5789 | 0.7625 | 0.7625 |
| 0.3121 | 26.44 | 4600 | 0.5664 | 0.7674 | 0.7672 |
| 0.302 | 27.59 | 4800 | 0.5861 | 0.7533 | 0.7539 |
| 0.2934 | 28.74 | 5000 | 0.5784 | 0.7569 | 0.7567 |
| 0.2937 | 29.89 | 5200 | 0.5977 | 0.7534 | 0.7531 |
| 0.2812 | 31.03 | 5400 | 0.5971 | 0.7575 | 0.7575 |
| 0.2787 | 32.18 | 5600 | 0.6287 | 0.7487 | 0.7492 |
| 0.2675 | 33.33 | 5800 | 0.6269 | 0.7643 | 0.7639 |
| 0.2674 | 34.48 | 6000 | 0.6238 | 0.7590 | 0.7585 |
| 0.2552 | 35.63 | 6200 | 0.6466 | 0.7610 | 0.7611 |
| 0.2587 | 36.78 | 6400 | 0.6403 | 0.7590 | 0.7589 |
| 0.2477 | 37.93 | 6600 | 0.6421 | 0.7539 | 0.7542 |
| 0.2405 | 39.08 | 6800 | 0.6798 | 0.7376 | 0.7380 |
| 0.2391 | 40.23 | 7000 | 0.6509 | 0.7511 | 0.7513 |
| 0.2355 | 41.38 | 7200 | 0.6706 | 0.7572 | 0.7571 |
| 0.2281 | 42.53 | 7400 | 0.7032 | 0.7441 | 0.7449 |
| 0.2321 | 43.68 | 7600 | 0.6918 | 0.7460 | 0.7463 |
| 0.2237 | 44.83 | 7800 | 0.7034 | 0.7502 | 0.7499 |
| 0.2214 | 45.98 | 8000 | 0.6958 | 0.7582 | 0.7578 |
| 0.2179 | 47.13 | 8200 | 0.7049 | 0.7534 | 0.7531 |
| 0.2125 | 48.28 | 8400 | 0.7326 | 0.7488 | 0.7488 |
| 0.2101 | 49.43 | 8600 | 0.7270 | 0.7541 | 0.7539 |
| 0.2086 | 50.57 | 8800 | 0.7434 | 0.7493 | 0.7492 |
| 0.2076 | 51.72 | 9000 | 0.7319 | 0.7508 | 0.7506 |
| 0.2024 | 52.87 | 9200 | 0.7368 | 0.7509 | 0.7506 |
| 0.2052 | 54.02 | 9400 | 0.7500 | 0.7498 | 0.7496 |
| 0.2042 | 55.17 | 9600 | 0.7443 | 0.7500 | 0.7499 |
| 0.2046 | 56.32 | 9800 | 0.7369 | 0.7530 | 0.7528 |
| 0.2003 | 57.47 | 10000 | 0.7377 | 0.7545 | 0.7542 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:16:19+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_16384\_512\_56M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5010
* F1 Score: 0.7846
* Accuracy: 0.7841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5803
- F1 Score: 0.6958
- Accuracy: 0.6962
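The F1 score and accuracy above are standard classification metrics. As a hedged illustration only (the exact evaluation code and the F1 averaging mode used for this run are not stated in the card), they could be computed from `Trainer` predictions roughly as follows.

```python
# Hedged sketch: accuracy and F1 from logits, e.g. as a Trainer compute_metrics hook.
# The macro averaging below is an assumption; the card does not state the averaging mode.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
    }
```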
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6495 | 0.87 | 200 | 0.6251 | 0.6533 | 0.6546 |
| 0.6245 | 1.74 | 400 | 0.6136 | 0.6606 | 0.6603 |
| 0.6133 | 2.61 | 600 | 0.6028 | 0.6698 | 0.6696 |
| 0.6036 | 3.48 | 800 | 0.5988 | 0.6740 | 0.6739 |
| 0.5992 | 4.35 | 1000 | 0.5990 | 0.6716 | 0.6717 |
| 0.5932 | 5.22 | 1200 | 0.5979 | 0.6684 | 0.6704 |
| 0.5904 | 6.09 | 1400 | 0.6170 | 0.6531 | 0.6598 |
| 0.5855 | 6.96 | 1600 | 0.5982 | 0.6715 | 0.6728 |
| 0.5823 | 7.83 | 1800 | 0.5914 | 0.6750 | 0.6747 |
| 0.5822 | 8.7 | 2000 | 0.5944 | 0.6728 | 0.6731 |
| 0.5776 | 9.57 | 2200 | 0.5857 | 0.6815 | 0.6813 |
| 0.5782 | 10.43 | 2400 | 0.5919 | 0.6794 | 0.6807 |
| 0.5738 | 11.3 | 2600 | 0.5848 | 0.6793 | 0.6807 |
| 0.5775 | 12.17 | 2800 | 0.5838 | 0.6824 | 0.6826 |
| 0.574 | 13.04 | 3000 | 0.5863 | 0.6777 | 0.6780 |
| 0.5706 | 13.91 | 3200 | 0.5819 | 0.6848 | 0.6851 |
| 0.5682 | 14.78 | 3400 | 0.5903 | 0.6730 | 0.6753 |
| 0.5686 | 15.65 | 3600 | 0.5853 | 0.6833 | 0.6842 |
| 0.5688 | 16.52 | 3800 | 0.5854 | 0.6798 | 0.6802 |
| 0.565 | 17.39 | 4000 | 0.5885 | 0.6834 | 0.6842 |
| 0.5676 | 18.26 | 4200 | 0.5839 | 0.6875 | 0.6880 |
| 0.5633 | 19.13 | 4400 | 0.5891 | 0.6838 | 0.6837 |
| 0.5633 | 20.0 | 4600 | 0.5894 | 0.6824 | 0.6837 |
| 0.5635 | 20.87 | 4800 | 0.5853 | 0.6881 | 0.6886 |
| 0.5612 | 21.74 | 5000 | 0.5876 | 0.6830 | 0.6840 |
| 0.5616 | 22.61 | 5200 | 0.5826 | 0.6879 | 0.6883 |
| 0.5609 | 23.48 | 5400 | 0.5954 | 0.6762 | 0.6802 |
| 0.5588 | 24.35 | 5600 | 0.5846 | 0.6876 | 0.6883 |
| 0.5608 | 25.22 | 5800 | 0.5918 | 0.6831 | 0.6861 |
| 0.555 | 26.09 | 6000 | 0.5926 | 0.6805 | 0.6829 |
| 0.5598 | 26.96 | 6200 | 0.5937 | 0.6812 | 0.6845 |
| 0.5559 | 27.83 | 6400 | 0.5982 | 0.6811 | 0.6853 |
| 0.5572 | 28.7 | 6600 | 0.5832 | 0.6869 | 0.6875 |
| 0.5538 | 29.57 | 6800 | 0.5808 | 0.6892 | 0.6899 |
| 0.5524 | 30.43 | 7000 | 0.5905 | 0.6841 | 0.6867 |
| 0.5589 | 31.3 | 7200 | 0.5872 | 0.6862 | 0.6883 |
| 0.5546 | 32.17 | 7400 | 0.5859 | 0.6849 | 0.6867 |
| 0.554 | 33.04 | 7600 | 0.5824 | 0.6875 | 0.6883 |
| 0.553 | 33.91 | 7800 | 0.5832 | 0.6861 | 0.6872 |
| 0.5554 | 34.78 | 8000 | 0.5845 | 0.6885 | 0.6897 |
| 0.5508 | 35.65 | 8200 | 0.5826 | 0.6879 | 0.6889 |
| 0.552 | 36.52 | 8400 | 0.5838 | 0.6890 | 0.6902 |
| 0.5521 | 37.39 | 8600 | 0.5829 | 0.6895 | 0.6902 |
| 0.5482 | 38.26 | 8800 | 0.5892 | 0.6860 | 0.6880 |
| 0.5518 | 39.13 | 9000 | 0.5868 | 0.6884 | 0.6902 |
| 0.5496 | 40.0 | 9200 | 0.5825 | 0.6890 | 0.6897 |
| 0.5477 | 40.87 | 9400 | 0.5829 | 0.6902 | 0.6908 |
| 0.5498 | 41.74 | 9600 | 0.5841 | 0.6865 | 0.6875 |
| 0.5556 | 42.61 | 9800 | 0.5824 | 0.6879 | 0.6889 |
| 0.5468 | 43.48 | 10000 | 0.5833 | 0.6873 | 0.6883 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:17:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_16384\_512\_56M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5803
* F1 Score: 0.6958
* Accuracy: 0.6962
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/databricks/dbrx-instruct
(actually the f16 from https://huggingface.co/dranger003/dbrx-instruct-iMat.GGUF as llama.cpp seems to have broken dbrx support currently)
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/dbrx-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
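As a concrete illustration, reassembling a split quant is a plain byte-wise concatenation of the parts in order. The sketch below uses the Q4_K_M file names from the table that follows; it is a generic example, not a tool shipped with this repository.

```python
# Sketch: stream the split GGUF parts into a single file, in order.
import shutil

parts = [
    "dbrx-instruct.Q4_K_M.gguf.part1of2",
    "dbrx-instruct.Q4_K_M.gguf.part2of2",
]
with open("dbrx-instruct.Q4_K_M.gguf", "wb") as merged:
    for name in parts:
        with open(name, "rb") as part:
            shutil.copyfileobj(part, merged)  # copies in chunks, not whole parts in memory
```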
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q2_K.gguf) | Q2_K | 48.0 | |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_XS.gguf.part2of2) | IQ3_XS | 53.9 | |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_S.gguf.part2of2) | IQ3_S | 56.9 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_S.gguf.part2of2) | Q3_K_S | 56.9 | |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ3_M.gguf.part2of2) | IQ3_M | 58.1 | |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_M.gguf.part2of2) | Q3_K_M | 63.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q3_K_L.gguf.part2of2) | Q3_K_L | 68.5 | |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.IQ4_XS.gguf.part2of2) | IQ4_XS | 71.0 | |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q4_K_S.gguf.part2of2) | Q4_K_S | 75.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q4_K_M.gguf.part2of2) | Q4_K_M | 80.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q5_K_S.gguf.part2of2) | Q5_K_S | 90.7 | |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q5_K_M.gguf.part2of2) | Q5_K_M | 93.7 | |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q6_K.gguf.part3of3) | Q6_K | 108.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/dbrx-instruct-GGUF/resolve/main/dbrx-instruct.Q8_0.gguf.part3of3) | Q8_0 | 139.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "databricks/dbrx-instruct", "quantized_by": "mradermacher"} | mradermacher/dbrx-instruct-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:databricks/dbrx-instruct",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:19:02+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-databricks/dbrx-instruct #endpoints_compatible #region-us
| About
-----
static quants of URL
(actually the f16 from URL as URL seems to have broken dbrx support currently)
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-databricks/dbrx-instruct #endpoints_compatible #region-us \n"
] | [
33
] | [
"TAGS\n#transformers #gguf #en #base_model-databricks/dbrx-instruct #endpoints_compatible #region-us \n"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA20
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
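The `cosine_with_restarts` schedule with 60 warmup steps corresponds to transformers' `get_cosine_with_hard_restarts_schedule_with_warmup` helper. The block below is a hedged sketch of that schedule in isolation; the total step count (taken from the results table) and the single-cycle setting are assumptions, and the dummy parameters are placeholders rather than this model's weights.

```python
# Hedged sketch: the warmup + cosine-with-restarts LR schedule used by the Trainer.
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder parameters
optimizer = torch.optim.Adam(params, lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=60,     # lr_scheduler_warmup_steps: 60
    num_training_steps=330,  # ~number of optimizer steps in the results table
    num_cycles=1,            # assumption: a single cosine cycle
)
for _ in range(330):
    optimizer.step()
    scheduler.step()
```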
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3593 | 0.09 | 10 | 0.1698 |
| 0.1623 | 0.18 | 20 | 0.1542 |
| 0.1492 | 0.27 | 30 | 0.1619 |
| 0.156 | 0.36 | 40 | 0.1530 |
| 0.1529 | 0.45 | 50 | 0.1495 |
| 0.1518 | 0.54 | 60 | 0.1483 |
| 0.1518 | 0.63 | 70 | 0.1469 |
| 0.1508 | 0.73 | 80 | 0.1569 |
| 0.1497 | 0.82 | 90 | 0.1489 |
| 0.1478 | 0.91 | 100 | 0.1490 |
| 0.1511 | 1.0 | 110 | 0.1499 |
| 0.1467 | 1.09 | 120 | 0.1471 |
| 0.1462 | 1.18 | 130 | 0.1528 |
| 0.1483 | 1.27 | 140 | 0.1490 |
| 0.1493 | 1.36 | 150 | 0.1513 |
| 0.146 | 1.45 | 160 | 0.1485 |
| 0.1463 | 1.54 | 170 | 0.1478 |
| 0.1484 | 1.63 | 180 | 0.1456 |
| 0.1469 | 1.72 | 190 | 0.1502 |
| 0.1456 | 1.81 | 200 | 0.1482 |
| 0.1494 | 1.9 | 210 | 0.1474 |
| 0.1457 | 1.99 | 220 | 0.1485 |
| 0.1449 | 2.08 | 230 | 0.1455 |
| 0.1381 | 2.18 | 240 | 0.1442 |
| 0.1399 | 2.27 | 250 | 0.1440 |
| 0.1412 | 2.36 | 260 | 0.1475 |
| 0.1391 | 2.45 | 270 | 0.1420 |
| 0.1351 | 2.54 | 280 | 0.1410 |
| 0.1331 | 2.63 | 290 | 0.1386 |
| 0.1349 | 2.72 | 300 | 0.1354 |
| 0.1317 | 2.81 | 310 | 0.1350 |
| 0.1301 | 2.9 | 320 | 0.1353 |
| 0.1327 | 2.99 | 330 | 0.1352 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA20", "results": []}]} | Litzy619/O0428HMA20 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T01:19:14+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA20
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1352
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 60
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** Kairaz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
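As a hedged usage sketch (not instructions from the author), the finetune can typically be loaded back through Unsloth's `FastLanguageModel`. The sequence length, dtype, 4-bit flag and the example prompt below are assumptions for illustration.

```python
# Hedged sketch: load the finetune with Unsloth for 4-bit inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Kairaz/games",
    max_seq_length=2048,   # assumption
    dtype=None,            # let Unsloth pick the dtype
    load_in_4bit=True,     # assumption
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("List three classic strategy games:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```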
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Kairaz/games | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:20:23+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Kairaz
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Kairaz\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Kairaz\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
64,
79
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: Kairaz\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
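Until more detail is added, the snippet below is a hedged sketch of one plausible workflow: attach the adapter to the base Phi-3 model and merge it for standalone use. It assumes the adapter is a LoRA-style adapter (so that `merge_and_unload` applies) and that `trust_remote_code=True` is acceptable for the base checkpoint.

```python
# Hedged sketch: load the adapter onto Phi-3-mini-4k-instruct and merge it.
# Assumes a LoRA-style adapter; adjust if the PEFT method differs.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "Surabhi-K/trainer"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()           # bake the adapter weights into the base model
merged.save_pretrained("phi3-mini-merged")  # illustrative output path
tokenizer.save_pretrained("phi3-mini-merged")
```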
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 18
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "trainer", "results": []}]} | Surabhi-K/trainer | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-04-30T01:20:54+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us
|
# trainer
This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 18
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# trainer\n\nThis model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 18\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.36.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us \n",
"# trainer\n\nThis model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 18\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.36.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
40,
30,
7,
9,
9,
4,
133,
48
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us \n# trainer\n\nThis model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 18\n- mixed_precision_training: Native AMP### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.36.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetune
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7793
- Accuracy: 0.7993
## Model description
More information needed
## Intended uses & limitations
More information needed
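Pending more information, the snippet below is a hedged sketch of multiple-choice inference with this checkpoint. The context and answer choices are invented for illustration, since the training dataset is not named in this card.

```python
# Hedged sketch: score answer choices with the fine-tuned multiple-choice head.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "avikumar/bert-base-uncased-finetune"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo).eval()

context = "She put the kettle on the stove."  # invented example
choices = ["The water soon began to boil.", "The car would not start."]

# Tokenize one (context, choice) pair per row, then add a batch dimension:
# the model expects input of shape (batch_size, num_choices, seq_len).
enc = tokenizer([context] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape (1, num_choices)
print("best choice index:", int(logits.argmax(dim=-1)))
```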
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7582 | 1.0 | 2299 | 0.5703 | 0.7783 |
| 0.381 | 2.0 | 4598 | 0.5787 | 0.7950 |
| 0.1529 | 3.0 | 6897 | 0.7793 | 0.7993 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetune", "results": []}]} | avikumar/bert-base-uncased-finetune | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:21:16+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #multiple-choice #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| bert-base-uncased-finetune
==========================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7793
* Accuracy: 0.7993
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #multiple-choice #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
50,
101,
5,
40
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #multiple-choice #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA9
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
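To make the batch arithmetic concrete: a per-device batch of 8 accumulated over 16 steps yields the listed effective batch of 128. The loop below is a generic, hedged sketch of gradient accumulation with native AMP on a dummy model; it is not the Trainer code used for this run and assumes a CUDA device.

```python
# Hedged sketch: gradient accumulation (16 micro-batches of 8) with native AMP.
# The tiny linear model and random data are placeholders for illustration only.
import torch

device = "cuda"  # native AMP as configured here assumes a GPU
model = torch.nn.Linear(32, 2).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
scaler = torch.cuda.amp.GradScaler()
loss_fn = torch.nn.CrossEntropyLoss()
accum_steps = 16  # gradient_accumulation_steps: 16 -> effective batch 8 * 16 = 128

optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(8, 32, device=device)          # micro-batch of 8
    y = torch.randint(0, 2, (8,), device=device)
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(x), y) / accum_steps  # scale loss for accumulation
    scaler.scale(loss).backward()
scaler.step(optimizer)  # one optimizer update per 16 micro-batches
scaler.update()
optimizer.zero_grad()
```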
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6202 | 0.09 | 10 | 0.2442 |
| 0.1807 | 0.18 | 20 | 0.1525 |
| 0.1486 | 0.27 | 30 | 0.1701 |
| 0.1564 | 0.36 | 40 | 0.1538 |
| 0.1507 | 0.45 | 50 | 0.1492 |
| 0.1511 | 0.54 | 60 | 0.1474 |
| 0.1491 | 0.63 | 70 | 0.1472 |
| 0.1496 | 0.73 | 80 | 0.1551 |
| 0.1466 | 0.82 | 90 | 0.1500 |
| 0.1496 | 0.91 | 100 | 0.1495 |
| 0.1516 | 1.0 | 110 | 0.1463 |
| 0.1509 | 1.09 | 120 | 0.1321 |
| 0.3642 | 1.18 | 130 | 0.2426 |
| 0.179 | 1.27 | 140 | 0.1081 |
| 0.1519 | 1.36 | 150 | 0.1300 |
| 0.272 | 1.45 | 160 | 0.0911 |
| 0.0746 | 1.54 | 170 | 0.0694 |
| 0.0657 | 1.63 | 180 | 0.0619 |
| 0.0678 | 1.72 | 190 | 0.0584 |
| 0.0578 | 1.81 | 200 | 0.0592 |
| 0.0577 | 1.9 | 210 | 0.0612 |
| 0.0599 | 1.99 | 220 | 0.0554 |
| 0.0587 | 2.08 | 230 | 0.0568 |
| 0.0538 | 2.18 | 240 | 0.0564 |
| 0.0562 | 2.27 | 250 | 0.0581 |
| 0.0591 | 2.36 | 260 | 0.0568 |
| 0.0537 | 2.45 | 270 | 0.0551 |
| 0.0523 | 2.54 | 280 | 0.0557 |
| 0.0548 | 2.63 | 290 | 0.0566 |
| 0.056 | 2.72 | 300 | 0.0545 |
| 0.0569 | 2.81 | 310 | 0.0543 |
| 0.0584 | 2.9 | 320 | 0.0545 |
| 0.0604 | 2.99 | 330 | 0.0545 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA9", "results": []}]} | Litzy619/O0428HMA9 | null | [
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T01:21:17+00:00 | [] | [] | TAGS
#generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA9
=========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0545
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
31,
160,
5,
47
] | [
"TAGS\n#generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 | {"library_name": "peft", "base_model": "microsoft/Phi-3-mini-4k-instruct"} | Surabhi-K/phi3_18epochs | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2024-04-30T01:21:40+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-microsoft/Phi-3-mini-4k-instruct #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.1 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.1"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-microsoft/Phi-3-mini-4k-instruct #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.1"
] | [
40,
6,
4,
50,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5,
13
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-microsoft/Phi-3-mini-4k-instruct #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.7.1"
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization (a sketch of this scaling rule is shown below)
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
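As a rough illustration of the NTK-aware initialization step, the sketch below applies the commonly used scaling rule theta' = theta * scale^(d/(d-2)). The head dimension (128) and base RoPE theta (500,000) are the publicly reported Llama-3 8B config values and should be read as assumptions here; the thetas in the table further down were additionally tuned empirically, so they are larger than what this raw formula gives.

```python
def ntk_scaled_rope_theta(base_theta: float, scale: float, head_dim: int) -> float:
    """Standard NTK-aware rule for initializing RoPE theta when the
    context window is extended by `scale`."""
    return base_theta * scale ** (head_dim / (head_dim - 2))

# Assumed Llama-3 8B values: head_dim = 4096 / 32 = 128, base rope_theta = 500_000.
base_theta, head_dim, original_ctx = 500_000.0, 128, 8_192
for target_ctx in (65_536, 262_144, 524_288, 1_048_576):
    scale = target_ctx / original_ctx
    theta = ntk_scaled_rope_theta(base_theta, scale, head_dim)
    print(f"{target_ctx:>9} tokens -> initial theta ~ {theta:.3e}")
```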
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
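As a quick consistency check, the "Total Tokens" row of the table can be reproduced from the other rows as sequence length x batch size x gradient accumulation steps x steps; the snippet below is just that arithmetic.

```python
# Reproduce the "Total Tokens" row: seq_len * batch_size * grad_accum * steps.
stages = {
    "65K":   (2**16, 1, 32, 30),
    "262K":  (2**18, 1, 16, 24),
    "524k":  (2**19, 16, 1, 50),
    "1048k": (2**20, 16, 1, 50),
}
for name, (seq_len, batch, accum, steps) in stages.items():
    print(name, seq_len * batch * accum * steps)
# -> 62914560, 100663296, 419430400, 838860800, matching the table above
```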
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
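For readers unfamiliar with GQA, the sketch below shows the core idea: a small set of key/value heads is shared across groups of query heads, shrinking the KV cache at inference time. The shapes used (32 query heads, 8 KV heads, head dimension 128) are the commonly reported Llama-3 8B values and are assumptions here, not something stated in this card.

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """Minimal GQA sketch: q has more heads than k/v, so each k/v head is
    shared by a contiguous group of query heads.
    q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)
    """
    group = q.shape[1] // k.shape[1]
    # Repeat each kv head so it lines up with its group of query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Assumed Llama-3 8B head layout: 32 query heads, 8 kv heads, head_dim 128.
q = torch.randn(1, 32, 16, 128)
k = torch.randn(1, 8, 16, 128)
v = torch.randn(1, 8, 16, 128)
out = grouped_query_attention(q, k, v)  # shape (1, 32, 16, 128)
```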
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
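As a rough back-of-the-envelope check on these figures (assuming the GPUs ran at the stated 700W TDP), multiplying GPU hours by power gives the energy used, and dividing the reported emissions by that energy implies a grid carbon intensity of roughly 0.42 kg CO2eq per kWh for every row of the table.

```python
# Back-of-the-envelope check of the emissions table, assuming GPUs at the 700 W TDP.
rows = {"Llama 3 8B": (1.3e6, 390), "Llama 3 70B": (6.4e6, 1900), "Total": (7.7e6, 2290)}
for name, (gpu_hours, tco2eq) in rows.items():
    energy_kwh = gpu_hours * 0.7                 # kWh at 0.7 kW per GPU
    intensity = tco2eq * 1000 / energy_kwh       # kg CO2eq per kWh
    print(f"{name}: {energy_kwh / 1e6:.2f} GWh, ~{intensity:.2f} kg CO2eq/kWh")
```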
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw3.5-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:23:51+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| [<img src="URL width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data:
For training data, we generate long contexts by augmenting SlimPajama.
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
#### Transformers AutoModelForCausalLM
### Use with 'llama3'
Please, follow the instructions in the repository
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
52,
42,
6,
13,
429,
8,
6,
270,
280,
72,
115,
118,
126,
2136
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.#### Transformers pipeline#### Transformers AutoModelForCausalLM### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.### Base pretrained models### Instruction tuned models### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. 
We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. 
On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-rating-poem
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the poem_sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1902
- Accuracy: 0.8762
- F1: 0.8765
## Model description
More information needed
## Intended uses & limitations
More information needed
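Pending that information, a minimal sketch of the intended direct use — classifying a verse with the Transformers pipeline — could look like the following. The checkpoint id is this repository's id; the example verse and the generic `LABEL_*` output names are assumptions, since the card does not document an id-to-label mapping.

```python
# Minimal inference sketch; label names are the default LABEL_0..LABEL_3 unless
# the checkpoint config maps them to the poem_sentiment class names.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="VuaCoBac/distilbert-base-uncased-finetuned-rating-poem",
)

print(classifier("No more, no more, the moon is dead"))
# Output shape: [{'label': 'LABEL_?', 'score': ...}]
```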
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
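The card lists the hyperparameters but not the training script. As a hedged reconstruction, the sketch below wires these values into the Transformers `Trainer`; the dataset column name (`verse_text`), the four-class label count, and the 50-step evaluation cadence are inferred from the `poem_sentiment` dataset and the results table rather than stated in the card.

```python
# Hedged reconstruction of the fine-tuning run; not the author's original script.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

dataset = load_dataset("poem_sentiment")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # "verse_text" is the text column of poem_sentiment (assumption: no extra cleanup).
    return tokenizer(batch["verse_text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=4,  # poem_sentiment labels: negative, positive, no_impact, mixed
)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-rating-poem",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",  # the results table reports eval every 50 steps
    eval_steps=50,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
    # compute_metrics is sketched after the results table below
)

trainer.train()
```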
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0599 | 0.45 | 50 | 1.0247 | 0.8571 | 0.8611 |
| 0.1257 | 0.89 | 100 | 1.1237 | 0.8571 | 0.8500 |
| 0.032 | 1.34 | 150 | 1.1346 | 0.8667 | 0.8567 |
| 0.0012 | 1.79 | 200 | 1.2181 | 0.8381 | 0.8373 |
| 0.0954 | 2.23 | 250 | 1.0423 | 0.8762 | 0.8667 |
| 0.0323 | 2.68 | 300 | 1.0560 | 0.8667 | 0.8715 |
| 0.0128 | 3.12 | 350 | 1.1156 | 0.8857 | 0.8809 |
| 0.0269 | 3.57 | 400 | 1.1702 | 0.8762 | 0.8681 |
| 0.0172 | 4.02 | 450 | 1.1968 | 0.8667 | 0.8678 |
| 0.0004 | 4.46 | 500 | 1.1906 | 0.8762 | 0.8765 |
| 0.0117 | 4.91 | 550 | 1.1902 | 0.8762 | 0.8765 |
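The Accuracy and F1 columns above imply that a `compute_metrics` callback was attached to the evaluation loop. A plausible version is sketched below; the weighted F1 averaging is an assumption, since the card reports a single F1 figure for a four-class problem without naming the averaging mode.

```python
# Hypothetical metric callback for the table above; pass it to the Trainer
# sketched earlier via `compute_metrics=compute_metrics`.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="weighted"),  # averaging mode assumed
    }
```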
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.2
- Datasets 2.12.0
- Tokenizers 0.13.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["poem_sentiment"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-rating-poem", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "poem_sentiment", "type": "poem_sentiment", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8761904761904762, "name": "Accuracy"}, {"type": "f1", "value": 0.8765098002671388, "name": "F1"}]}]}]} | VuaCoBac/distilbert-base-uncased-finetuned-rating-poem | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:poem_sentiment",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:25:03+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-poem_sentiment #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-rating-poem
=============================================
This model is a fine-tuned version of distilbert-base-uncased on the poem\_sentiment dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1902
* Accuracy: 0.8762
* F1: 0.8765
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.32.1
* Pytorch 2.2.2
* Datasets 2.12.0
* Tokenizers 0.13.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.2\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-poem_sentiment #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.2\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] | [
68,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-poem_sentiment #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.2\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
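The author leaves this section empty. As a placeholder, the snippet below shows a generic way one might load this checkpoint for text generation with Transformers; it is an assumption based on the repository's `mistral` / `text-generation` tags, not code provided by the author.

```python
# Generic loading sketch; not provided by the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jski/UltraMerge-v2-7B"  # repository id of this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)

inputs = tokenizer(
    "Write a short sentence about open-source language models.",
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```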
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jski/UltraMerge-v2-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:25:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
44,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
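No usage snippet has been provided yet. As a hedged placeholder, the sketch below loads a 4-bit Llama checkpoint through `transformers` with `bitsandbytes`, matching this record's `4-bit`/`unsloth` tags; the repository id comes from the record's metadata and the prompt format is only a guess.

```python
# Hedged sketch: assumes a 4-bit quantized Llama checkpoint loadable through
# transformers with bitsandbytes; all settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "clarkchan/llama3-8b-alpaca-cn"  # assumption: taken from this record's id field

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Assumption: an Alpaca-style prompt, inferred only from the model name; the card
# does not document the actual template.
prompt = "### Instruction:\nIntroduce yourself briefly.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```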
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | clarkchan/llama3-8b-alpaca-cn | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-30T01:27:01+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
58,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5969
- F1 Score: 0.6975
- Accuracy: 0.6976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
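For illustration only, these hyperparameters map roughly onto the `transformers` `TrainingArguments` sketched below; the actual training script is not included in this card, so the names, output directory, and evaluation cadence are assumptions inferred from the results table.

```python
# Hedged sketch: approximate TrainingArguments matching the listed hyperparameters.
# The actual training script is not part of this card; names here are illustrative.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K4me3-seqsight_16384_512_56M-L8_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,   # assumption: "train_batch_size" is per device
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,                  # "training_steps: 10000"
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,                    # evaluation appears every 200 steps in the results table
    logging_steps=200,
)
# A PEFT adapter (PEFT 0.9.0 per the framework versions) would be applied to the
# base model before passing it to transformers.Trainer together with these args.
```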
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6432 | 0.87 | 200 | 0.6124 | 0.6610 | 0.6609 |
| 0.6088 | 1.74 | 400 | 0.6067 | 0.6690 | 0.6707 |
| 0.5947 | 2.61 | 600 | 0.5922 | 0.6828 | 0.6829 |
| 0.5882 | 3.48 | 800 | 0.5893 | 0.6794 | 0.6793 |
| 0.5822 | 4.35 | 1000 | 0.5875 | 0.6769 | 0.6766 |
| 0.5758 | 5.22 | 1200 | 0.5858 | 0.6832 | 0.6848 |
| 0.5728 | 6.09 | 1400 | 0.6043 | 0.6695 | 0.6742 |
| 0.5666 | 6.96 | 1600 | 0.5931 | 0.6826 | 0.6840 |
| 0.5612 | 7.83 | 1800 | 0.5899 | 0.6813 | 0.6810 |
| 0.5593 | 8.7 | 2000 | 0.5884 | 0.6871 | 0.6875 |
| 0.5557 | 9.57 | 2200 | 0.5817 | 0.6863 | 0.6864 |
| 0.5536 | 10.43 | 2400 | 0.5959 | 0.6865 | 0.6891 |
| 0.5501 | 11.3 | 2600 | 0.5791 | 0.6954 | 0.6970 |
| 0.5528 | 12.17 | 2800 | 0.5763 | 0.6920 | 0.6924 |
| 0.5447 | 13.04 | 3000 | 0.5880 | 0.6907 | 0.6929 |
| 0.5401 | 13.91 | 3200 | 0.5858 | 0.6926 | 0.6946 |
| 0.5375 | 14.78 | 3400 | 0.5954 | 0.6903 | 0.6937 |
| 0.5371 | 15.65 | 3600 | 0.5845 | 0.6852 | 0.6883 |
| 0.5352 | 16.52 | 3800 | 0.5785 | 0.6947 | 0.6948 |
| 0.5285 | 17.39 | 4000 | 0.6022 | 0.6984 | 0.7003 |
| 0.5315 | 18.26 | 4200 | 0.5866 | 0.6940 | 0.6959 |
| 0.5242 | 19.13 | 4400 | 0.5850 | 0.6995 | 0.6995 |
| 0.5238 | 20.0 | 4600 | 0.5912 | 0.6982 | 0.7008 |
| 0.5193 | 20.87 | 4800 | 0.5875 | 0.6972 | 0.6976 |
| 0.5196 | 21.74 | 5000 | 0.5850 | 0.6949 | 0.6951 |
| 0.5183 | 22.61 | 5200 | 0.5878 | 0.6933 | 0.6948 |
| 0.5173 | 23.48 | 5400 | 0.5961 | 0.6909 | 0.6943 |
| 0.5097 | 24.35 | 5600 | 0.5933 | 0.6947 | 0.6965 |
| 0.5118 | 25.22 | 5800 | 0.5924 | 0.6993 | 0.7 |
| 0.5061 | 26.09 | 6000 | 0.6060 | 0.6951 | 0.6970 |
| 0.5106 | 26.96 | 6200 | 0.5891 | 0.6928 | 0.6957 |
| 0.5045 | 27.83 | 6400 | 0.6064 | 0.6856 | 0.6889 |
| 0.5042 | 28.7 | 6600 | 0.5888 | 0.6982 | 0.6981 |
| 0.5017 | 29.57 | 6800 | 0.5842 | 0.6985 | 0.6989 |
| 0.4993 | 30.43 | 7000 | 0.5908 | 0.6971 | 0.6984 |
| 0.5033 | 31.3 | 7200 | 0.5922 | 0.7005 | 0.7011 |
| 0.5005 | 32.17 | 7400 | 0.5878 | 0.6983 | 0.6986 |
| 0.4961 | 33.04 | 7600 | 0.5890 | 0.7012 | 0.7014 |
| 0.4948 | 33.91 | 7800 | 0.5893 | 0.6981 | 0.6989 |
| 0.4955 | 34.78 | 8000 | 0.5919 | 0.7009 | 0.7014 |
| 0.4931 | 35.65 | 8200 | 0.5915 | 0.7000 | 0.7 |
| 0.4898 | 36.52 | 8400 | 0.5890 | 0.6999 | 0.7 |
| 0.4875 | 37.39 | 8600 | 0.5926 | 0.6985 | 0.6984 |
| 0.4874 | 38.26 | 8800 | 0.5965 | 0.7008 | 0.7014 |
| 0.4915 | 39.13 | 9000 | 0.5920 | 0.7020 | 0.7022 |
| 0.486 | 40.0 | 9200 | 0.5944 | 0.6986 | 0.6984 |
| 0.4873 | 40.87 | 9400 | 0.5935 | 0.7029 | 0.7030 |
| 0.4862 | 41.74 | 9600 | 0.5929 | 0.7023 | 0.7024 |
| 0.4929 | 42.61 | 9800 | 0.5914 | 0.7015 | 0.7016 |
| 0.4828 | 43.48 | 10000 | 0.5937 | 0.7034 | 0.7035 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:27:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_16384\_512\_56M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5969
* F1 Score: 0.6975
* Accuracy: 0.6976
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7298
- F1 Score: 0.7016
- Accuracy: 0.7014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6389 | 0.87 | 200 | 0.6063 | 0.6685 | 0.6682 |
| 0.6006 | 1.74 | 400 | 0.6004 | 0.6810 | 0.6823 |
| 0.5855 | 2.61 | 600 | 0.5854 | 0.6826 | 0.6834 |
| 0.5775 | 3.48 | 800 | 0.5817 | 0.6907 | 0.6905 |
| 0.5686 | 4.35 | 1000 | 0.5836 | 0.6871 | 0.6870 |
| 0.5597 | 5.22 | 1200 | 0.5855 | 0.6867 | 0.6878 |
| 0.555 | 6.09 | 1400 | 0.5977 | 0.6832 | 0.6856 |
| 0.5465 | 6.96 | 1600 | 0.5786 | 0.7001 | 0.7 |
| 0.5366 | 7.83 | 1800 | 0.5892 | 0.6940 | 0.6937 |
| 0.5307 | 8.7 | 2000 | 0.5852 | 0.6975 | 0.6973 |
| 0.5226 | 9.57 | 2200 | 0.5890 | 0.6929 | 0.6940 |
| 0.5192 | 10.43 | 2400 | 0.6053 | 0.6946 | 0.6962 |
| 0.5129 | 11.3 | 2600 | 0.5802 | 0.6979 | 0.6984 |
| 0.5075 | 12.17 | 2800 | 0.6029 | 0.6850 | 0.6856 |
| 0.4983 | 13.04 | 3000 | 0.5983 | 0.6980 | 0.6989 |
| 0.4894 | 13.91 | 3200 | 0.5995 | 0.6991 | 0.6992 |
| 0.4812 | 14.78 | 3400 | 0.6421 | 0.6874 | 0.6889 |
| 0.4747 | 15.65 | 3600 | 0.6179 | 0.6899 | 0.6929 |
| 0.4691 | 16.52 | 3800 | 0.6068 | 0.6935 | 0.6943 |
| 0.4593 | 17.39 | 4000 | 0.6400 | 0.6920 | 0.6924 |
| 0.458 | 18.26 | 4200 | 0.6236 | 0.6997 | 0.7014 |
| 0.4482 | 19.13 | 4400 | 0.6311 | 0.6921 | 0.6921 |
| 0.4433 | 20.0 | 4600 | 0.6343 | 0.6947 | 0.6951 |
| 0.4326 | 20.87 | 4800 | 0.6531 | 0.6964 | 0.6965 |
| 0.4294 | 21.74 | 5000 | 0.6335 | 0.6938 | 0.6937 |
| 0.425 | 22.61 | 5200 | 0.6397 | 0.6950 | 0.6954 |
| 0.4206 | 23.48 | 5400 | 0.6499 | 0.6965 | 0.6970 |
| 0.4128 | 24.35 | 5600 | 0.6704 | 0.7029 | 0.7038 |
| 0.4089 | 25.22 | 5800 | 0.6735 | 0.6975 | 0.6973 |
| 0.4042 | 26.09 | 6000 | 0.6734 | 0.7021 | 0.7027 |
| 0.4003 | 26.96 | 6200 | 0.6617 | 0.6964 | 0.6976 |
| 0.3907 | 27.83 | 6400 | 0.6731 | 0.6968 | 0.6976 |
| 0.3843 | 28.7 | 6600 | 0.6912 | 0.6900 | 0.6899 |
| 0.3804 | 29.57 | 6800 | 0.6820 | 0.6957 | 0.6954 |
| 0.3831 | 30.43 | 7000 | 0.6843 | 0.6929 | 0.6927 |
| 0.3766 | 31.3 | 7200 | 0.6948 | 0.7019 | 0.7019 |
| 0.3749 | 32.17 | 7400 | 0.6839 | 0.6965 | 0.6965 |
| 0.3661 | 33.04 | 7600 | 0.6864 | 0.6994 | 0.6997 |
| 0.3648 | 33.91 | 7800 | 0.6997 | 0.6982 | 0.6984 |
| 0.3635 | 34.78 | 8000 | 0.7016 | 0.6964 | 0.6962 |
| 0.3593 | 35.65 | 8200 | 0.7018 | 0.6965 | 0.6962 |
| 0.3513 | 36.52 | 8400 | 0.7165 | 0.6962 | 0.6959 |
| 0.3509 | 37.39 | 8600 | 0.7196 | 0.7045 | 0.7043 |
| 0.3461 | 38.26 | 8800 | 0.7234 | 0.7018 | 0.7016 |
| 0.349 | 39.13 | 9000 | 0.7181 | 0.6974 | 0.6973 |
| 0.3445 | 40.0 | 9200 | 0.7203 | 0.6981 | 0.6978 |
| 0.3464 | 40.87 | 9400 | 0.7161 | 0.6948 | 0.6946 |
| 0.3407 | 41.74 | 9600 | 0.7187 | 0.6967 | 0.6965 |
| 0.343 | 42.61 | 9800 | 0.7229 | 0.6978 | 0.6976 |
| 0.3365 | 43.48 | 10000 | 0.7276 | 0.6959 | 0.6957 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:27:48+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_16384\_512\_56M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7298
* F1 Score: 0.7016
* Accuracy: 0.7014
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2576
- F1 Score: 0.9083
- Accuracy: 0.9083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3259 | 2.17 | 200 | 0.2859 | 0.8885 | 0.8884 |
| 0.2744 | 4.35 | 400 | 0.2952 | 0.8874 | 0.8871 |
| 0.2658 | 6.52 | 600 | 0.2808 | 0.8907 | 0.8905 |
| 0.2644 | 8.7 | 800 | 0.2921 | 0.8922 | 0.8919 |
| 0.2556 | 10.87 | 1000 | 0.2708 | 0.8967 | 0.8966 |
| 0.253 | 13.04 | 1200 | 0.2768 | 0.8969 | 0.8966 |
| 0.2482 | 15.22 | 1400 | 0.2714 | 0.8913 | 0.8912 |
| 0.2444 | 17.39 | 1600 | 0.2728 | 0.8976 | 0.8973 |
| 0.2407 | 19.57 | 1800 | 0.2639 | 0.8932 | 0.8932 |
| 0.2397 | 21.74 | 2000 | 0.2797 | 0.8928 | 0.8925 |
| 0.2345 | 23.91 | 2200 | 0.2662 | 0.8975 | 0.8973 |
| 0.2327 | 26.09 | 2400 | 0.2734 | 0.8921 | 0.8919 |
| 0.2288 | 28.26 | 2600 | 0.2632 | 0.8953 | 0.8953 |
| 0.2254 | 30.43 | 2800 | 0.2632 | 0.8913 | 0.8912 |
| 0.2224 | 32.61 | 3000 | 0.2648 | 0.8945 | 0.8946 |
| 0.2193 | 34.78 | 3200 | 0.2640 | 0.8960 | 0.8960 |
| 0.2171 | 36.96 | 3400 | 0.2628 | 0.8960 | 0.8960 |
| 0.2162 | 39.13 | 3600 | 0.2616 | 0.8933 | 0.8932 |
| 0.2111 | 41.3 | 3800 | 0.2631 | 0.8993 | 0.8994 |
| 0.2072 | 43.48 | 4000 | 0.2666 | 0.8918 | 0.8919 |
| 0.2155 | 45.65 | 4200 | 0.2627 | 0.8972 | 0.8973 |
| 0.2039 | 47.83 | 4400 | 0.2622 | 0.8958 | 0.8960 |
| 0.2046 | 50.0 | 4600 | 0.2662 | 0.8936 | 0.8939 |
| 0.201 | 52.17 | 4800 | 0.2643 | 0.8978 | 0.8980 |
| 0.2031 | 54.35 | 5000 | 0.2653 | 0.8986 | 0.8987 |
| 0.1967 | 56.52 | 5200 | 0.2676 | 0.8974 | 0.8973 |
| 0.1968 | 58.7 | 5400 | 0.2658 | 0.8952 | 0.8953 |
| 0.1924 | 60.87 | 5600 | 0.2702 | 0.8972 | 0.8973 |
| 0.1914 | 63.04 | 5800 | 0.2702 | 0.8946 | 0.8946 |
| 0.1945 | 65.22 | 6000 | 0.2674 | 0.8992 | 0.8994 |
| 0.1906 | 67.39 | 6200 | 0.2662 | 0.8966 | 0.8966 |
| 0.1873 | 69.57 | 6400 | 0.2693 | 0.8971 | 0.8973 |
| 0.1881 | 71.74 | 6600 | 0.2693 | 0.8978 | 0.8980 |
| 0.186 | 73.91 | 6800 | 0.2660 | 0.8979 | 0.8980 |
| 0.184 | 76.09 | 7000 | 0.2678 | 0.9001 | 0.9001 |
| 0.1843 | 78.26 | 7200 | 0.2671 | 0.8972 | 0.8973 |
| 0.1847 | 80.43 | 7400 | 0.2657 | 0.8972 | 0.8973 |
| 0.1818 | 82.61 | 7600 | 0.2691 | 0.8957 | 0.8960 |
| 0.1842 | 84.78 | 7800 | 0.2678 | 0.8972 | 0.8973 |
| 0.1819 | 86.96 | 8000 | 0.2686 | 0.8950 | 0.8953 |
| 0.1822 | 89.13 | 8200 | 0.2681 | 0.8957 | 0.8960 |
| 0.1784 | 91.3 | 8400 | 0.2716 | 0.8936 | 0.8939 |
| 0.1759 | 93.48 | 8600 | 0.2760 | 0.8928 | 0.8932 |
| 0.179 | 95.65 | 8800 | 0.2755 | 0.8928 | 0.8932 |
| 0.1801 | 97.83 | 9000 | 0.2704 | 0.8943 | 0.8946 |
| 0.1782 | 100.0 | 9200 | 0.2700 | 0.8951 | 0.8953 |
| 0.1785 | 102.17 | 9400 | 0.2705 | 0.8936 | 0.8939 |
| 0.1781 | 104.35 | 9600 | 0.2707 | 0.8943 | 0.8946 |
| 0.1751 | 106.52 | 9800 | 0.2724 | 0.8935 | 0.8939 |
| 0.1759 | 108.7 | 10000 | 0.2719 | 0.8929 | 0.8932 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:27:53+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H4-seqsight\_16384\_512\_56M-L1\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2576
* F1 Score: 0.9083
* Accuracy: 0.9083
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
## Exllama v2 Quantizations of starcoder2-15b-instruct-v0.1
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1
## Prompt format
```
<|endoftext|>You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
### Instruction
{prompt}
### Response
<|endoftext|>
```
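Below is a minimal sketch of how this template can be applied; the `transformers` pipeline call is only an illustration, and any backend that accepts a plain string prompt (including an ExLlamaV2 loader for these quants) can be substituted.

```python
# Hedged sketch: builds the prompt string exactly as in the template above.
def build_prompt(instruction: str) -> str:
    # Mirrors the template above; the trailing <|endoftext|> in the template is the
    # stop token emitted at the end of the response (assumption).
    return (
        "<|endoftext|>You are an exceptionally intelligent coding assistant that "
        "consistently delivers accurate and reliable responses to user instructions.\n\n"
        "### Instruction\n"
        f"{instruction}\n\n"
        "### Response\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")

# Illustrative generation with a transformers pipeline (any backend that accepts a
# plain string prompt works the same way):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="bigcode/starcoder2-15b-instruct-v0.1", device_map="auto")
# print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```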
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/8_0) | 8.0 | 8.0 | 15.8 GB | 16.8 GB | 18.1 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/6_5) | 6.5 | 8.0 | 13.9 GB | 14.9 GB | 16.2 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/5_0) | 5.0 | 6.0 | 11.0 GB | 12.0 GB | 13.2 GB | Slightly lower quality vs 6.5. |
| [4_25](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/4_25) | 4.25 | 6.0 | 9.5 GB | 10.5 GB | 11.8 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2/tree/3_5) | 3.5 | 6.0 | 8.1 GB | 9.1 GB | 10.4 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/starcoder2-15b-instruct-v0.1-exl2 starcoder2-15b-instruct-v0.1-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/starcoder2-15b-instruct-v0.1-exl2 --revision 6_5 --local-dir starcoder2-15b-instruct-v0.1-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/starcoder2-15b-instruct-v0.1-exl2 --revision 6_5 --local-dir starcoder2-15b-instruct-v0.1-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski | {"license": "bigcode-openrail-m", "library_name": "transformers", "tags": ["code"], "datasets": ["bigcode/self-oss-instruct-sc2-exec-filter-50k"], "pipeline_tag": "text-generation", "base_model": "bigcode/starcoder2-15b", "quantized_by": "bartowski", "model-index": [{"name": "starcoder2-15b-instruct-v0.1", "results": [{"task": {"type": "text-generation"}, "dataset": {"name": "LiveCodeBench (code generation)", "type": "livecodebench-codegeneration"}, "metrics": [{"type": "pass@1", "value": 20.4}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "LiveCodeBench (self repair)", "type": "livecodebench-selfrepair"}, "metrics": [{"type": "pass@1", "value": 20.9}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "LiveCodeBench (test output prediction)", "type": "livecodebench-testoutputprediction"}, "metrics": [{"type": "pass@1", "value": 29.8}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "LiveCodeBench (code execution)", "type": "livecodebench-codeexecution"}, "metrics": [{"type": "pass@1", "value": 28.1}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "HumanEval", "type": "humaneval"}, "metrics": [{"type": "pass@1", "value": 72.6}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "HumanEval+", "type": "humanevalplus"}, "metrics": [{"type": "pass@1", "value": 63.4}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "MBPP", "type": "mbpp"}, "metrics": [{"type": "pass@1", "value": 75.2}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "MBPP+", "type": "mbppplus"}, "metrics": [{"type": "pass@1", "value": 61.2}]}, {"task": {"type": "text-generation"}, "dataset": {"name": "DS-1000", "type": "ds-1000"}, "metrics": [{"type": "pass@1", "value": 40.6}]}]}]} | bartowski/starcoder2-15b-instruct-v0.1-exl2 | null | [
"transformers",
"code",
"text-generation",
"dataset:bigcode/self-oss-instruct-sc2-exec-filter-50k",
"base_model:bigcode/starcoder2-15b",
"license:bigcode-openrail-m",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:28:20+00:00 | [] | [] | TAGS
#transformers #code #text-generation #dataset-bigcode/self-oss-instruct-sc2-exec-filter-50k #base_model-bigcode/starcoder2-15b #license-bigcode-openrail-m #model-index #endpoints_compatible #region-us
| Exllama v2 Quantizations of starcoder2-15b-instruct-v0.1
--------------------------------------------------------
Using <a href="URL ExLlamaV2 v0.0.20 for quantization.
**The "main" branch only contains the URL, download one of the other branches for the model (see below)**
Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions.
Original model: URL
Prompt format
-------------
Available sizes
---------------
Download instructions
---------------------
With git:
With huggingface hub (credit to TheBloke for instructions):
To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch:
Linux:
Windows (which apparently doesn't like \_ in folders sometimes?):
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#transformers #code #text-generation #dataset-bigcode/self-oss-instruct-sc2-exec-filter-50k #base_model-bigcode/starcoder2-15b #license-bigcode-openrail-m #model-index #endpoints_compatible #region-us \n"
] | [
72
] | [
"TAGS\n#transformers #code #text-generation #dataset-bigcode/self-oss-instruct-sc2-exec-filter-50k #base_model-bigcode/starcoder2-15b #license-bigcode-openrail-m #model-index #endpoints_compatible #region-us \n"
] |
null | null | quantized_by: KnightCodin
---
## Exllama v2 Quantizations of <a href="https://huggingface.co/winglian/llama-3-8b-256k-PoSE"> winglian/llama-3-8b-256k-PoSE </a>
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original Model : https://huggingface.co/winglian/llama-3-8b-256k-PoSE
## Llama 3 8B 256K
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 256k and beyond @ rope_theta: 500000.0.
For this model, we build upon our 64k model with 75M tokens of continued pretraining data from SlimPajama to extend the context to 256k @ rope_theta: 500k.
We have not been able to run the needle-in-a-haystack test due to issues with inference at these long contexts.
Thanks to [Crusoe Energy](https://twitter.com/CrusoeEnergy) for the compute support for this model.
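As a rough illustration of what the extended context means at load time, the sketch below inspects and loads the checkpoint with `transformers`; the field names follow the standard Llama config, the expected values are assumptions taken from the description above, and actually using contexts anywhere near 256k requires a very large KV cache.

```python
# Hedged sketch: inspecting the long-context RoPE settings described above.
# Values are taken from this card's description (rope_theta 500000.0, context toward 256k)
# and are illustrative, not a guarantee of what the checkpoint's config contains.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "winglian/llama-3-8b-256k-PoSE"

config = AutoConfig.from_pretrained(model_id)
print(config.rope_theta)               # expected ~500000.0 per the description above
print(config.max_position_embeddings)  # the advertised context window

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,  # illustrative
    device_map="auto",
)
# Attending over hundreds of thousands of tokens needs a very large KV cache;
# the card itself notes inference issues at these lengths.
```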
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase.
### Use with transformers
See the snippet below for usage with Transformers:
```python
>>> import transformers
>>> import torch
>>> model_id = "meta-llama/Meta-Llama-3-8B"
>>> pipeline = transformers.pipeline(
"text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
>>> pipeline("Hey how are you doing today?")
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| {"language": ["en"], "license": "cc-by-nc-4.0"} | Knightcodin/Llama-3-8b-256k-PoSE-exl2 | null | [
"en",
"arxiv:2309.10400",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-30T01:29:50+00:00 | [
"2309.10400"
] | [
"en"
] | TAGS
#en #arxiv-2309.10400 #license-cc-by-nc-4.0 #region-us
| quantized\_by: KnightCodin
--------------------------
Exllama v2 Quantizations of <a href="URL">winglian/llama-3-8b-256k-PoSE</a>
----------------------------------------------------------------------
Using <a href="URL">ExLlamaV2</a> v0.0.19 for quantization.
**The "main" branch only contains the URL, download one of the other branches for the model (see below)**
Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions.
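To fetch one of the quantized branches directly, something like the following should work with `huggingface_hub` (the revision name `6_5` is only an illustrative bits-per-weight label, and the local directory name is just an example; check the repository's branch list for the revisions that actually exist):

```python
from huggingface_hub import snapshot_download

# Download a specific quantization branch of this repository
snapshot_download(
    repo_id="Knightcodin/Llama-3-8b-256k-PoSE-exl2",
    revision="6_5",  # hypothetical branch name; pick one that exists in the repo
    local_dir="Llama-3-8b-256k-PoSE-exl2-6_5",
)
```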
Original Model : URL
Llama 3 8B 256K
---------------
<img src="URL" alt="Built with Axolotl" width="200" height="32"/>
This model uses PoSE to extend Llama's context length from 8k to 256k and beyond @ rope\_theta: 500000.0.
For this model, we build upon our 64k model with 75M tokens of continued pretraining data from SlimPajama to extend the context to 256k @ rope\_theta: 500k.
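A quick way to confirm the extended-context settings after download is to read them off the model config; a minimal sketch using standard `transformers` config attributes (the attribute names assume the usual Llama config layout):

```python
from transformers import AutoConfig

# Inspect the RoPE base frequency and context window advertised by the config
config = AutoConfig.from_pretrained("winglian/llama-3-8b-256k-PoSE")
print(config.rope_theta)               # expected: 500000.0, per the note above
print(config.max_position_embeddings)  # extended context length
```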
We have not been able to test the needle in haystack due to issues with inferencing at these long contexts.
Thanks to Crusoe Energy for the compute support for this model.
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes – 8B and 70B parameters – in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
See the snippet below for usage with Transformers:
### Use with 'llama3'
Please, follow the instructions in the repository.
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nSee the snippet below for usage with Transformers:",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#en #arxiv-2309.10400 #license-cc-by-nc-4.0 #region-us \n",
"### Use with transformers\n\n\nSee the snippet below for usage with Transformers:",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
29,
17,
430,
8,
6,
270,
280,
72,
115,
118,
126,
2136
] | [
"TAGS\n#en #arxiv-2309.10400 #license-cc-by-nc-4.0 #region-us \n### Use with transformers\n\n\nSee the snippet below for usage with Transformers:### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.### Base pretrained models### Instruction tuned models### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. 
We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. 
For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2737
- F1 Score: 0.9018
- Accuracy: 0.9021
## Model description
More information needed
## Intended uses & limitations
More information needed
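
No usage example has been documented for this adapter yet. As a rough placeholder, the sketch below shows one way the LoRA adapter could be loaded on top of the base checkpoint with the `peft` and `transformers` libraries; the sequence-classification head, `num_labels=2`, and any `trust_remote_code` requirement are assumptions rather than details taken from this card.

```python
# Hedged sketch: loading this adapter for sequence classification.
# num_labels=2 and the task head are assumptions about the GUE_EMP_H4 task;
# the base model may also need trust_remote_code=True if it ships custom code.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
```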
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3138 | 2.17 | 200 | 0.2870 | 0.8880 | 0.8877 |
| 0.2636 | 4.35 | 400 | 0.2759 | 0.8928 | 0.8925 |
| 0.2524 | 6.52 | 600 | 0.2649 | 0.8966 | 0.8966 |
| 0.2474 | 8.7 | 800 | 0.2766 | 0.8920 | 0.8919 |
| 0.2339 | 10.87 | 1000 | 0.2621 | 0.8897 | 0.8898 |
| 0.2282 | 13.04 | 1200 | 0.2823 | 0.8902 | 0.8898 |
| 0.2171 | 15.22 | 1400 | 0.2686 | 0.8955 | 0.8953 |
| 0.2085 | 17.39 | 1600 | 0.2772 | 0.8867 | 0.8864 |
| 0.2012 | 19.57 | 1800 | 0.2622 | 0.8958 | 0.8960 |
| 0.1931 | 21.74 | 2000 | 0.2746 | 0.8921 | 0.8919 |
| 0.1857 | 23.91 | 2200 | 0.2753 | 0.8950 | 0.8953 |
| 0.1829 | 26.09 | 2400 | 0.2679 | 0.8979 | 0.8980 |
| 0.173 | 28.26 | 2600 | 0.2834 | 0.8990 | 0.8994 |
| 0.1662 | 30.43 | 2800 | 0.2865 | 0.8966 | 0.8966 |
| 0.1585 | 32.61 | 3000 | 0.3245 | 0.8896 | 0.8905 |
| 0.1559 | 34.78 | 3200 | 0.3056 | 0.8907 | 0.8912 |
| 0.1499 | 36.96 | 3400 | 0.3101 | 0.8977 | 0.8980 |
| 0.1486 | 39.13 | 3600 | 0.2958 | 0.8984 | 0.8987 |
| 0.1419 | 41.3 | 3800 | 0.3143 | 0.8946 | 0.8946 |
| 0.1337 | 43.48 | 4000 | 0.3392 | 0.8877 | 0.8877 |
| 0.1375 | 45.65 | 4200 | 0.3398 | 0.8809 | 0.8816 |
| 0.1284 | 47.83 | 4400 | 0.3472 | 0.8835 | 0.8836 |
| 0.1238 | 50.0 | 4600 | 0.3613 | 0.8828 | 0.8836 |
| 0.1218 | 52.17 | 4800 | 0.3771 | 0.8831 | 0.8836 |
| 0.1196 | 54.35 | 5000 | 0.3853 | 0.8728 | 0.8734 |
| 0.1153 | 56.52 | 5200 | 0.3680 | 0.8841 | 0.8843 |
| 0.1127 | 58.7 | 5400 | 0.3492 | 0.8856 | 0.8857 |
| 0.1052 | 60.87 | 5600 | 0.3919 | 0.8751 | 0.8754 |
| 0.1057 | 63.04 | 5800 | 0.3935 | 0.8775 | 0.8775 |
| 0.1031 | 65.22 | 6000 | 0.4049 | 0.8781 | 0.8789 |
| 0.0991 | 67.39 | 6200 | 0.3886 | 0.8856 | 0.8857 |
| 0.0964 | 69.57 | 6400 | 0.3824 | 0.8787 | 0.8789 |
| 0.0955 | 71.74 | 6600 | 0.4175 | 0.8820 | 0.8823 |
| 0.0929 | 73.91 | 6800 | 0.4135 | 0.8833 | 0.8836 |
| 0.0905 | 76.09 | 7000 | 0.4160 | 0.8828 | 0.8830 |
| 0.0908 | 78.26 | 7200 | 0.4075 | 0.8790 | 0.8795 |
| 0.0873 | 80.43 | 7400 | 0.4152 | 0.8815 | 0.8816 |
| 0.0867 | 82.61 | 7600 | 0.4671 | 0.8724 | 0.8734 |
| 0.0867 | 84.78 | 7800 | 0.4273 | 0.8847 | 0.8850 |
| 0.0834 | 86.96 | 8000 | 0.4327 | 0.8799 | 0.8802 |
| 0.0809 | 89.13 | 8200 | 0.4389 | 0.8800 | 0.8802 |
| 0.0784 | 91.3 | 8400 | 0.4524 | 0.8738 | 0.8741 |
| 0.0773 | 93.48 | 8600 | 0.4755 | 0.8790 | 0.8795 |
| 0.0767 | 95.65 | 8800 | 0.4662 | 0.8825 | 0.8830 |
| 0.0781 | 97.83 | 9000 | 0.4542 | 0.8827 | 0.8830 |
| 0.0769 | 100.0 | 9200 | 0.4575 | 0.8774 | 0.8775 |
| 0.0726 | 102.17 | 9400 | 0.4654 | 0.8806 | 0.8809 |
| 0.074 | 104.35 | 9600 | 0.4733 | 0.8779 | 0.8782 |
| 0.0739 | 106.52 | 9800 | 0.4757 | 0.8770 | 0.8775 |
| 0.072 | 108.7 | 10000 | 0.4706 | 0.8792 | 0.8795 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:30:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H4-seqsight\_16384\_512\_56M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2737
* F1 Score: 0.9018
* Accuracy: 0.9021
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2550
- F1 Score: 0.9006
- Accuracy: 0.9008
## Model description
More information needed
## Intended uses & limitations
More information needed
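
No usage example has been documented for this adapter yet. As a rough placeholder, the sketch below shows how the LoRA weights could be merged back into the base checkpoint for standalone deployment; the sequence-classification head and `num_labels=2` are assumptions rather than details taken from this card.

```python
# Hedged sketch: merging the LoRA adapter into the base model.
# num_labels=2 is an assumption about the GUE_EMP_H4 task; the output
# directory name is hypothetical.
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L32_f"

base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
merged = PeftModel.from_pretrained(base_model, adapter_id).merge_and_unload()
merged.save_pretrained("./GUE_EMP_H4-L32_f-merged")
```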
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.307 | 2.17 | 200 | 0.2865 | 0.8908 | 0.8905 |
| 0.258 | 4.35 | 400 | 0.2702 | 0.8904 | 0.8905 |
| 0.241 | 6.52 | 600 | 0.2569 | 0.8986 | 0.8987 |
| 0.2272 | 8.7 | 800 | 0.2783 | 0.8888 | 0.8884 |
| 0.2056 | 10.87 | 1000 | 0.2594 | 0.9048 | 0.9049 |
| 0.1931 | 13.04 | 1200 | 0.2890 | 0.8887 | 0.8884 |
| 0.1742 | 15.22 | 1400 | 0.2875 | 0.8975 | 0.8973 |
| 0.1601 | 17.39 | 1600 | 0.3076 | 0.8901 | 0.8898 |
| 0.1488 | 19.57 | 1800 | 0.3107 | 0.8916 | 0.8919 |
| 0.1382 | 21.74 | 2000 | 0.3345 | 0.8918 | 0.8919 |
| 0.1195 | 23.91 | 2200 | 0.3596 | 0.8890 | 0.8891 |
| 0.1125 | 26.09 | 2400 | 0.3816 | 0.8912 | 0.8912 |
| 0.1 | 28.26 | 2600 | 0.4127 | 0.8835 | 0.8836 |
| 0.0893 | 30.43 | 2800 | 0.4338 | 0.8850 | 0.8850 |
| 0.0802 | 32.61 | 3000 | 0.4783 | 0.8773 | 0.8782 |
| 0.0735 | 34.78 | 3200 | 0.4466 | 0.8735 | 0.8741 |
| 0.0695 | 36.96 | 3400 | 0.4774 | 0.8773 | 0.8775 |
| 0.0586 | 39.13 | 3600 | 0.5263 | 0.8751 | 0.8754 |
| 0.0569 | 41.3 | 3800 | 0.5288 | 0.8730 | 0.8727 |
| 0.0496 | 43.48 | 4000 | 0.6031 | 0.8752 | 0.8747 |
| 0.0486 | 45.65 | 4200 | 0.5492 | 0.8718 | 0.8720 |
| 0.0391 | 47.83 | 4400 | 0.5965 | 0.8761 | 0.8761 |
| 0.0374 | 50.0 | 4600 | 0.6584 | 0.8742 | 0.8747 |
| 0.036 | 52.17 | 4800 | 0.6468 | 0.8813 | 0.8816 |
| 0.032 | 54.35 | 5000 | 0.6886 | 0.8851 | 0.8850 |
| 0.0304 | 56.52 | 5200 | 0.6704 | 0.8845 | 0.8843 |
| 0.0298 | 58.7 | 5400 | 0.6396 | 0.8810 | 0.8809 |
| 0.0252 | 60.87 | 5600 | 0.6969 | 0.8839 | 0.8836 |
| 0.0253 | 63.04 | 5800 | 0.6920 | 0.8768 | 0.8768 |
| 0.0222 | 65.22 | 6000 | 0.7377 | 0.8810 | 0.8809 |
| 0.0229 | 67.39 | 6200 | 0.7602 | 0.8731 | 0.8727 |
| 0.0213 | 69.57 | 6400 | 0.7484 | 0.8762 | 0.8761 |
| 0.0223 | 71.74 | 6600 | 0.7040 | 0.8843 | 0.8843 |
| 0.0189 | 73.91 | 6800 | 0.7103 | 0.8817 | 0.8816 |
| 0.0156 | 76.09 | 7000 | 0.8209 | 0.8806 | 0.8802 |
| 0.0185 | 78.26 | 7200 | 0.7703 | 0.8811 | 0.8809 |
| 0.0164 | 80.43 | 7400 | 0.7721 | 0.8824 | 0.8823 |
| 0.0165 | 82.61 | 7600 | 0.7630 | 0.8778 | 0.8782 |
| 0.0147 | 84.78 | 7800 | 0.7728 | 0.8845 | 0.8843 |
| 0.0145 | 86.96 | 8000 | 0.7902 | 0.8743 | 0.8741 |
| 0.0127 | 89.13 | 8200 | 0.8076 | 0.8784 | 0.8782 |
| 0.0131 | 91.3 | 8400 | 0.8044 | 0.8858 | 0.8857 |
| 0.0118 | 93.48 | 8600 | 0.8129 | 0.8817 | 0.8816 |
| 0.0124 | 95.65 | 8800 | 0.7860 | 0.8823 | 0.8823 |
| 0.01 | 97.83 | 9000 | 0.8226 | 0.8866 | 0.8864 |
| 0.0112 | 100.0 | 9200 | 0.8501 | 0.8812 | 0.8809 |
| 0.0112 | 102.17 | 9400 | 0.8284 | 0.8879 | 0.8877 |
| 0.0107 | 104.35 | 9600 | 0.8299 | 0.8872 | 0.8871 |
| 0.0096 | 106.52 | 9800 | 0.8253 | 0.8822 | 0.8823 |
| 0.01 | 108.7 | 10000 | 0.8320 | 0.8865 | 0.8864 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:31:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H4-seqsight\_16384\_512\_56M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2550
* F1 Score: 0.9006
* Accuracy: 0.9008
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null | # ๐ LLaVA: Large Language and Vision Assistant
*Visual instruction tuning towards large language and vision models with GPT-4 level capabilities.*
[[Project Page](https://llava-vl.github.io/)] [[Paper](https://arxiv.org/abs/2304.08485)] [[Demo](https://llava.hliu.cc/)] [[Data](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K)] [[Model](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0)]
**Visual Instruction Tuning** <br>
[Haotian Liu*](https://hliu.cc), [Chunyuan Li*](https://chunyuan.li/), [Qingyang Wu](https://scholar.google.ca/citations?user=HDiw-TsAAAAJ&hl=en/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/) (*Equal Contribution)
<p align="center">
<a href="https://llava.hliu.cc/"><img src="images/llava_logo.png" width="50%"></a> <br>
Generated by <a href="https://gligen.github.io/">GLIGEN</a> via "a cute lava llama with glasses" and box prompt
</p>
## Release
- [7/19] 🔥 We release a major upgrade, including support for LLaMA-2, LoRA training, 4-/8-bit inference, higher resolution (336x336), and a lot more. We release [LLaVA Bench](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md) for benchmarking open-ended visual chat with results from Bard and Bing-Chat. We also support and verify training with RTX 3090 and RTX A6000. Check out [LLaVA-from-LLaMA-2](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_from_LLaMA2.md), [release notes](https://github.com/haotian-liu/LLaVA/blob/main/docs/Release_Notes.md#7192023), and our [model zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)!
- [6/26] [CVPR 2023 Tutorial](https://vlp-tutorial.github.io/) on **Large Multimodal Models: Towards Building and Surpassing Multimodal GPT-4**! Please check out [[Slides](https://datarelease.blob.core.windows.net/tutorial/vision_foundation_models_2023/slides/Chunyuan_cvpr2023_tutorial_lmm.pdf)] [[Notes](https://arxiv.org/abs/2306.14895)] [[YouTube](https://youtu.be/mkI7EPD1vp8)] [[Bilibili](https://www.bilibili.com/video/BV1Ng4y1T7v3/)].
- [6/11] We released the preview for the most requested feature: DeepSpeed and LoRA support! Please see the documentation [here](./docs/LoRA.md).
- [6/1] We released **LLaVA-Med: Large Language and Vision Assistant for Biomedicine**, a step towards building biomedical-domain large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2306.00890) and [page](https://github.com/microsoft/LLaVA-Med).
- [5/13] Interested in quantifying the emergent **zero-shot OCR** performance of LLaVA and open-sourced LMMs? Please check out the paper ["On the Hidden Mystery of OCR in Large Multimodal Models"](https://arxiv.org/abs/2305.07895), where LLaVA consistently outperforms miniGPT4 on 17 out of 18 datasets, despite being trained with an order of magnitude less training data.
- [5/6] We are releasing [LLaVA-Lightning-MPT-7B-preview](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview), based on MPT-7B-Chat! See [here](#LLaVA-MPT-7b) for more details.
- [5/2] 🔥 We are releasing LLaVA-Lightning! Train a lite, multimodal GPT-4 with just $40 in 3 hours! See [here](#train-llava-lightning) for more details.
- [5/2] We upgraded the LLaVA package to v0.1 to support Vicuna v0 and v1 checkpoints; please upgrade following the instructions [here](#install).
- [4/30] Our checkpoint with Vicuna-7b-v0 has been released [here](#llava-7b)! This checkpoint is more accessible and device friendly. Stay tuned for a major upgrade next week!
- [4/27] Thanks to the community effort, LLaVA-13B with 4-bit quantization allows you to run on a GPU with as little as 12GB of VRAM! Try it out [here](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava).
- [4/17] 🔥 We released **LLaVA: Large Language and Vision Assistant**. We propose visual instruction tuning, towards building large language and vision models with GPT-4 level capabilities. Check out the [paper](https://arxiv.org/abs/2304.08485) and [demo](https://llava.hliu.cc/).
<!-- <a href="https://llava.hliu.cc/"><img src="assets/demo.gif" width="70%"></a> -->
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
**Usage and License Notices**: The data, code, and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna, and GPT-4. The dataset is CC BY NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.
## Contents
- [Install](#install)
- [LLaVA Weights](#llava-weights)
- [Demo](#Demo)
- [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)
- [Dataset](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md)
- [Train](#train)
- [Evaluation](#evaluation)
## Install
1. Clone this repository and navigate to LLaVA folder
```bash
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
```
2. Install Package
```Shell
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip # enable PEP 660 support
pip install -e .
```
3. Install additional packages for training cases
```
pip install ninja
pip install flash-attn --no-build-isolation
```
### Upgrade to latest code base
```Shell
git pull
pip uninstall transformers
pip install -e .
```
## LLaVA Weights
We release [LLaVA](https://llava-vl.github.io/) weights as delta weights to comply with the LLaMA model license.
You can add our delta to the original LLaMA weights to obtain the LLaVA weights.
Instructions:
1. Get the original LLaMA weights in the huggingface format by following the instructions [here](https://huggingface.co/docs/transformers/main/model_doc/llama).
2. Use the following scripts to get LLaVA weights by applying our delta ([13b-v0](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0), [7b-v0](https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0), [lightning-7B-v1-1](https://huggingface.co/liuhaotian/LLaVA-Lightning-7B-delta-v1-1)). It will automatically download delta weights from our Hugging Face account.
```bash
python3 -m llava.model.apply_delta \
--base /path/to/llama-7b \
--target /output/path/to/LLaVA-7B-v0 \
--delta liuhaotian/LLaVA-7b-delta-v0
```
## Demo
To run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions [here](#llava-weights) to download the checkpoints.
### Gradio Web UI
To launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server *ONCE*.
#### Launch a controller
```Shell
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```
#### Launch a gradio web server.
```Shell
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
```
You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.
#### Launch a model worker
This is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in `--model-path`.
```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./checkpoints/LLaVA-13B-v0
```
Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.
You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.
```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path <ckpt2>
```
#### Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)
If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs.
```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./checkpoints/LLaVA-13B-v0 --num-gpus 2
```
### CLI Inference
A starting script for inference with LLaVA without the need of the Gradio interface. The current implementation only supports a single-turn Q-A session, and the interactive CLI is WIP. This also serves as an example for users to build customized inference scripts.
```Shell
python -m llava.eval.run_llava \
--model-name /path/to/LLaVA-13B-v0 \
--image-file "https://llava-vl.github.io/static/images/view.jpg" \
--query "What are the things I should be cautious about when I visit here?"
```
Example output (varies in different runs):
> When visiting this picturesque location with a serene lake and a wooden pier extending over the water, one should be cautious about various safety aspects. Some important considerations include:
>
> 1. Ensuring that the pier is structurally sound and stable, as old or weakened pier structures might not support the weight of visitors.
> 2. Being aware of the water depth around the pier and lake, as sudden drop-offs or strong currents may pose a risk to swimmers, boaters, or those who venture too close to the edge.
> 3. Staying vigilant about the presence of wildlife in the area, such as slippery, stealthy fish or other animals that might cause harm or inconvenience.
> 4. Maintaining a safe distance from the water's edge, particularly for children, elderly individuals, or those who are not strong swimmers.
> 5. Following any posted signs or guidelines related to safety and the use of the pier and surrounding areas.
>
> By considering these safety precautions, visitors can enjoy the natural beauty of the location while minimizing risks and ensuring a safe and pleasant experience.
## Train
LLaVA training consists of two stages: (1) feature alignment stage: use approximately 600K filtered CC3M image-text pairs to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following data to teach the model to follow multimodal instructions.
LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the `per_device_train_batch_size` and increase the `gradient_accumulation_steps` accordingly. Always keep the global batch size the same: `per_device_train_batch_size` x `gradient_accumulation_steps` x number of GPUs; a quick sanity check is sketched below.
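
As a minimal illustration (values taken from the pretraining commands in this README; the helper function itself is hypothetical), the following check confirms that moving from 8 GPUs to 1 GPU while multiplying `gradient_accumulation_steps` by 8 preserves the same global batch size:

```python
# Illustrative sanity check: the global batch size should stay constant
# when trading GPU count for gradient accumulation steps.
def global_batch_size(per_device_train_batch_size: int,
                      gradient_accumulation_steps: int,
                      num_gpus: int) -> int:
    return per_device_train_batch_size * gradient_accumulation_steps * num_gpus

# Default pretraining setup: 8 GPUs, per-device batch 16, no accumulation.
assert global_batch_size(16, 1, 8) == 128
# Single-GPU variant: accumulate gradients over 8 steps to keep 128.
assert global_batch_size(16, 8, 1) == 128
```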
### Hyperparameters
We use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.
1. Pretraining
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-13B | 128 | 2e-3 | 1 | 2048 | 0 |
2. Finetuning
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-13B | 32 | 2e-5 | 3 | 2048 | 0 |
### Prepare Vicuna checkpoints
Before you start, prepare our base model Vicuna, which is an instruction-tuned chatbot. Please download its weights [here](https://github.com/lm-sys/FastChat#model-weights).
Vicuna has two versions: v0 and v1; the main difference between them is the format of the prompt. We support both. To ensure the best performance, you need to specify the correct prompt version corresponding to the weights you download: `v0` for `v0` weights, and `v1` for all Vicuna `v1.x` models.
### Pretrain (feature alignment)
Please download the subset of the CC3M dataset we use in the paper [here](https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K).
Pretraining takes around 4 hours for LLaVA-13B on 8x A100 (80G), and around 2 hours for the 7B checkpoint.
```Shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
llava/train/train_mem.py \
--model_name_or_path ./checkpoints/vicuna-13b \
--version [v0 or v1] \
--data_path /path/to/cc3m_595k.json \
--image_folder /path/to/cc3m_595k \
--vision_tower openai/clip-vit-large-patch14 \
--tune_mm_mlp_adapter True \
--mm_vision_select_layer -2 \
--mm_use_im_start_end \
--bf16 True \
--output_dir ./checkpoints/llava-13b-pretrain \
--num_train_epochs 1 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2400 \
--save_total_limit 1 \
--learning_rate 2e-3 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--report_to wandb
```
You may run this with a single A100 GPU with the following code. Please note that the `per_device_train_batch_size` * `gradient_accumulation_steps` should be equal to 128 to keep the global batch size the same.
<details>
<summary>Pretrain: LLaVA-13B, 1x A100 (80G). Time: ~33 hours.</summary>
```Shell
python llava/train/train_mem.py \
--model_name_or_path ./checkpoints/vicuna-13b \
--version [v0 or v1] \
--data_path /path/to/cc3m_595k.json \
--image_folder /path/to/cc3m_595k \
--vision_tower openai/clip-vit-large-patch14 \
--tune_mm_mlp_adapter True \
--mm_vision_select_layer -2 \
--mm_use_im_start_end \
--bf16 True \
--output_dir ./checkpoints/llava-13b-pretrain \
--num_train_epochs 1 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2400 \
--save_total_limit 1 \
--learning_rate 2e-3 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--report_to wandb
```
</details>
<details>
<summary>Pretrain: LLaVA-7B, 1x A100 (80G/40G). Time: ~19 hours.</summary>
```Shell
python llava/train/train_mem.py \
--model_name_or_path ./checkpoints/vicuna-7b \
--version [v0 or v1] \
--data_path /path/to/cc3m_595k.json \
--image_folder /path/to/cc3m_595k \
--vision_tower openai/clip-vit-large-patch14 \
--tune_mm_mlp_adapter True \
--mm_vision_select_layer -2 \
--mm_use_im_start_end \
--bf16 True \
--output_dir ./checkpoints/llava-7b-pretrain \
--num_train_epochs 1 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2400 \
--save_total_limit 1 \
--learning_rate 2e-3 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--report_to wandb
```
</details>
### Visual Instruction Tuning
1. Prepare data
Please download the annotation of our instruction tuning data [llava_instruct_158k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_150k.json), and download the COCO train2017 images [here](https://cocodataset.org/#download).
2. Extract projector features from the pretrained model from the feature alignment stage.
```Shell
python scripts/extract_mm_projector.py \
--model_name_or_path ./checkpoints/llava-13b-pretrain \
--output ./checkpoints/mm_projector/llava-13b-pretrain.bin
```
3. Start training!
You may download our pretrained `llava-13b-pretrain.bin` [here](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/blob/main/LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption.bin).
```Shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
llava/train/train_mem.py \
--model_name_or_path /path/to/vicuna-13b \
--version [v0 or v1] \
--data_path ./playground/data/llava_instruct_158k.json \
--image_folder /path/to/coco/train2017 \
--vision_tower openai/clip-vit-large-patch14 \
--pretrain_mm_mlp_adapter ./checkpoints/mm_projector/llava-13b-pretrain.bin \
--mm_vision_select_layer -2 \
--mm_use_im_start_end True \
--bf16 True \
--output_dir ./checkpoints/llava-13b-finetune \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--model_max_length 2048 \
--gradient_checkpointing True \
--dataloader_num_workers 4 \
--lazy_preprocess True \
--report_to wandb
```
### Lightning
*NOTE: When comparing to LLaVA-Lightning checkpoints in the paper, please use `LLaVA (Lightning)` instead of `LLaVA`, as they use a different set of training data and a different training schedule.*
LLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40.
For LLaVA Lightning, we create two distilled subsets to ensure both broad concept coverage and training efficiency. Furthermore, we only perform instruction tuning for 1 epoch, in contrast to 3 epochs in the paper.
For pretraining, we create a concept-balanced subset of LAION-CC-SBU. It consists of 558K images. Download data [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/tree/main).
For instruction tuning, we create a subset of LLaVA-Instruct-150K. It consists of 80K image-instruction pairs (40K conversation and 40K complex-reasoning examples) with non-overlapping images. Download `llava_instruct_80k.json` [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_instruct_80k.json).
#### Hyperparameters
1. Pretraining
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-Lightning-7B | 128 | 2e-3 | 1 | 2048 | 0 |
2. Visual Instruction Tuning
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| LLaVA-Lightning-7B | 128 | 2e-5 | 1 | 2048 | 0 |
#### LLaVA-MPT-7b
Thanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7b-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.
*NOTE: When comparing to LLaVA-MPT-7B checkpoints in the paper, please use `LLaVA-MPT-7B (Lightning)` instead of `LLaVA`, as they use a different base LLM, training data, and schedule.*
**NOTE**: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.
**NOTE**: Unlike other LLaVA models, this model should be used directly without delta weights conversion!
**NOTE**: You need to upgrade to our latest code base to use LLaVA-MPT-7b!
1. Usage
You do not need to download our checkpoint; it will load directly from our Hugging Face model: [`liuhaotian/LLaVA-Lightning-MPT-7B-preview`](https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview).
```Shell
python -m llava.serve.controller --host 0.0.0.0 --port 10000
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview
python -m llava.serve.gradio_web_server --controller http://localhost:10000
```
2. Training
We use the same set of training dataset, and the hyperparameters as other Lightning checkpoints.
### ScienceQA
**NOTE**: Because the ScienceQA experiments were done earlier, the current checkpoints were trained *without* `<im_start>` and `<im_end>` tokens. Here we provide our training scripts for the current checkpoints.
<details>
<summary>1. Pretraining</summary>
```Shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
llava/train/train_mem.py \
--model_name_or_path ./checkpoints/llama-vicuna-13b \
--data_path /path/to/cc3m_595k.json \
--image_folder /path/to/cc3m_595k \
--vision_tower openai/clip-vit-large-patch14 \
--tune_mm_mlp_adapter True \
--mm_vision_select_layer -2 \
--bf16 True \
--output_dir ./checkpoints/llava-13b-pretrain-no_im_start_end_token \
--num_train_epochs 1 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2400 \
--save_total_limit 1 \
--learning_rate 2e-3 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--report_to wandb
```
</details>
<details>
<summary>2. Extract projector features</summary>
```Shell
python scripts/extract_mm_projector.py \
--model_name_or_path ./checkpoints/llava-13b-pretrain-no_im_start_end_token \
--output ./checkpoints/mm_projector/llava-13b-pretrain-no_im_start_end_token.bin
```
</details>
<details>
<summary>3. Finetuning</summary>
You may download our pretrained `llava-13b-pretrain-no_im_start_end_token.bin` [here](https://huggingface.co/liuhaotian/LLaVA-13b-pretrain-projector-v0/blob/main/LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption-no_im_token.bin).
```Shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
llava/train/train_mem.py \
--model_name_or_path /path/to/llama-vicuna-13b \
--data_path /path/to/scienceqa/llava_train_QCM-LEPA.json \
--image_folder /path/to/scienceqa/images/train \
--vision_tower openai/clip-vit-large-patch14 \
--pretrain_mm_mlp_adapter ./checkpoints/mm_projector/llava-13b-pretrain-no_im_start_end_token.bin \
--mm_vision_select_layer -2 \
--bf16 True \
--output_dir ./checkpoints/llava-13b-pretrain-no_im_start_end_token-finetune_scienceqa \
--num_train_epochs 12 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 5000 \
--save_total_limit 3 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--report_to wandb
```
</details>
## Evaluation
### GPT-assisted Evaluation
Our GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.
1. Generate LLaVA responses
```Shell
python model_vqa.py \
--model-name ./checkpoints/LLaVA-13B-v0 \
--question-file \
playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
--image-folder \
/path/to/coco2014_val \
--answers-file \
/path/to/answer-file-our.jsonl
```
2. Evaluate the generated responses. In our case, [`answer-file-ref.jsonl`](./playground/data/coco2014_val_qa_eval/qa90_gpt4_answer.jsonl) is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.
```Shell
OPENAI_API_KEY="sk-***********************************" python llava/eval/eval_gpt_review_visual.py \
--question playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
--context llava/eval/table/caps_boxes_coco2014_val_80.jsonl \
--answer-list \
/path/to/answer-file-ref.jsonl \
/path/to/answer-file-our.jsonl \
--rule llava/eval/table/rule.json \
--output /path/to/review.json
```
3. Summarize the evaluation results
```Shell
python summarize_gpt_review.py
```
### ScienceQA
#### Prepare Data
1. Please see ScienceQA [repo](https://github.com/lupantech/ScienceQA) for setting up the dataset.
2. Generate ScienceQA dataset for LLaVA conversation-style format.
```Shell
python scripts/convert_sqa_to_llava.py \
convert_to_llava \
--base-dir /path/to/ScienceQA/data/scienceqa \
--split {train,val,minival,test,minitest}
```
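For example, to generate only the test split used in the evaluation below (the base directory path is illustrative):
```Shell
python scripts/convert_sqa_to_llava.py \
    convert_to_llava \
    --base-dir /path/to/ScienceQA/data/scienceqa \
    --split test
```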
#### Evaluation
1. Download our pretrained LLaVA-13B (delta) weights for ScienceQA dataset [here](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0-science_qa). Convert the delta weights to actual weights.
```Shell
python -m llava.model.apply_delta \
--base /path/to/llama-13b \
--target /path/to/LLaVA-13b-v0-science_qa \
--delta liuhaotian/LLaVA-13b-delta-v0-science_qa
```
2. [Option 1] Multiple-GPU inference
You may evaluate this with multiple GPUs, and concatenate the generated jsonl files. Please refer to our script for [batch evaluation](scripts/sqa_eval_batch.sh) and [results gathering](scripts/sqa_eval_gather.sh).
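For instance, once each GPU has written its own chunked answers file, the per-chunk outputs can be merged with a simple concatenation (the chunk file names below are hypothetical and only illustrate the idea):
```Shell
# Merge per-GPU answer chunks into a single answers file (names are illustrative).
cat vqa/results/ScienceQA/test_llava-13b-chunk*.jsonl \
    > vqa/results/ScienceQA/test_llava-13b.jsonl
```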
3. [Option 2] Single-GPU inference
(a) Generate LLaVA responses on ScienceQA dataset
```Shell
python -m llava.eval.model_vqa_science \
--model-name /path/to/LLaVA-13b-v0-science_qa \
--question-file /path/to/ScienceQA/data/scienceqa/llava_test.json \
--image-folder /path/to/ScienceQA/data/scienceqa/images/test \
--answers-file vqa/results/ScienceQA/test_llava-13b.jsonl \
--answer-prompter \
--conv-mode llava_v0
```
(b) Evaluate the generated responses
```Shell
python eval_science_qa.py \
--base-dir /path/to/ScienceQA/data/scienceqa \
--result-file vqa/results/ScienceQA/test_llava-13b.jsonl \
--output-file vqa/results/ScienceQA/test_llava-13b_output.json \
--output-result vqa/results/ScienceQA/test_llava-13b_result.json
```
For reference, we attach our prediction file `test_llava-13b_result.json` [here](llava/eval/table/results/test_sqa_llava_13b_v0.json) for comparison when reproducing our results, as well as for further analysis in detail.
## Citation
If you find LLaVA useful for your research and applications, please cite using this BibTeX:
```bibtex
@misc{liu2023llava,
title={Visual Instruction Tuning},
author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
publisher={arXiv:2304.08485},
year={2023},
}
```
## Acknowledgement
- [Vicuna](https://github.com/lm-sys/FastChat): the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!
## Related Projects
- [Instruction Tuning with GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day](https://github.com/microsoft/LLaVA-Med)
- [Otter: In-Context Multi-Modal Instruction Tuning](https://github.com/Luodian/Otter)
For future project ideas, please check out:
- [SEEM: Segment Everything Everywhere All at Once](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once)
- [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) to detect, segment, and generate anything by marrying [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO) and [Segment-Anything](https://github.com/facebookresearch/segment-anything).
| {} | multitensor/mistal-llava | null | [
"safetensors",
"arxiv:2304.08485",
"arxiv:2306.14895",
"arxiv:2306.00890",
"arxiv:2305.07895",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:32:33+00:00 | [
"2304.08485",
"2306.14895",
"2306.00890",
"2305.07895"
] | [] | TAGS
#safetensors #arxiv-2304.08485 #arxiv-2306.14895 #arxiv-2306.00890 #arxiv-2305.07895 #endpoints_compatible #region-us
| LLaVA: Large Language and Vision Assistant
==========================================
*Visual instruction tuning towards large language and vision models with GPT-4 level capabilities.*
[Project Page] [Paper] [Demo] [Data] [Model]
Visual Instruction Tuning
Haotian Liu\*, Chunyuan Li\*, Qingyang Wu, Yong Jae Lee (\*Equal Contribution)
Release
-------
* [7/19] We release a major upgrade, including support for LLaMA-2, LoRA training, 4-/8-bit inference, higher resolution (336x336), and a lot more. We release LLaVA Bench for benchmarking open-ended visual chat with results from Bard and Bing-Chat. We also support and verify training with RTX 3090 and RTX A6000. Check out LLaVA-from-LLaMA-2, release notes, and our model zoo!
* [6/26] CVPR 2023 Tutorial on Large Multimodal Models: Towards Building and Surpassing Multimodal GPT-4! Please check out [Slides] [Notes] [YouTube] [Bilibli].
* [6/11] We released the preview for the mostly requested feature: DeepSpeed and LoRA support! Please see documentations here.
* [6/1] We released LLaVA-Med: Large Language and Vision Assistant for Biomedicine, a step towards building biomedical domain large language and vision models with GPT-4 level capabilities. Checkout the paper and page.
* [5/13] Interested in quantifying the emerged zero-shot OCR performance of LLaVA and open-sourced LMM? Please check out the paper "On the Hidden Mystery of OCR in Large Multimodal Models", where LLaVA consistently outperforms miniGPT4 on 17 out of 18 datasets, despite LlaVA being trained with an order of magnitude smaller training data.
* [5/6] We are releasing LLaVA-Lightning-MPT-7B-preview, based on MPT-7B-Chat! See here for more details.
* [5/2] We are releasing LLaVA-Lightning! Train a lite, multimodal GPT-4 with just $40 in 3 hours! See here for more details.
* [5/2] We upgrade LLaVA package to v0.1 to support Vicuna v0 and v1 checkpoints, please upgrade following instructions here.
* [4/30] Our checkpoint with Vicuna-7b-v0 has been released here! This checkpoint is more accessible and device friendly. Stay tuned for a major upgrade next week!
* [4/27] Thanks to the community effort, LLaVA-13B with 4-bit quantization allows you to run on a GPU with as few as 12GB VRAM! Try it out here.
* [4/17] We released LLaVA: Large Language and Vision Assistant. We propose visual instruction tuning, towards building large language and vision models with GPT-4 level capabilities. Checkout the paper and demo.
 and models trained using the dataset should not be used outside of research purposes.
Contents
--------
* Install
* LLaVA Weights
* Demo
* Model Zoo
* Dataset
* Train
* Evaluation
Install
-------
1. Clone this repository and navigate to LLaVA folder
2. Install Package
3. Install additional packages for training cases
### Upgrade to latest code base
LLaVA Weights
-------------
We release LLaVA weights as delta weights to comply with the LLaMA model license.
You can add our delta to the original LLaMA weights to obtain the LLaVA weights.
Instructions:
1. Get the original LLaMA weights in the huggingface format by following the instructions here.
2. Use the following scripts to get LLaVA weights by applying our delta (13b-v0, 7b-v0, lightning-7B-v1-1). It will automatically download delta weights from our Hugging Face account.
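As a sketch, applying the 13b-v0 delta looks like the ScienceQA delta-conversion command shown later in this document; the delta repository name and the local paths below are illustrative assumptions, not verified values:

```Shell
python -m llava.model.apply_delta \
    --base /path/to/llama-13b \
    --target /path/to/LLaVA-13B-v0 \
    --delta liuhaotian/LLaVA-13b-delta-v0
```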
Demo
----
To run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions here to download the checkpoints.
### Gradio Web UI
To launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server *ONCE*.
#### Launch a controller
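The controller command, as it appears in the MPT usage section of this document, is:

```Shell
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```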
#### Launch a gradio web server.
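The corresponding command, as shown in the MPT usage section of this document, is:

```Shell
python -m llava.serve.gradio_web_server --controller http://localhost:10000
```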
You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.
#### Launch a model worker
This is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in '--model-path'.
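A typical launch command, taken from the MPT usage section of this document (the model path is just one example), is:

```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview
```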
Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.
You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the '--controller' the same, and modify the '--port' and '--worker' to a different port number for each worker.
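For example, a second worker could reuse the same controller but listen on port 40001 (the model path below is a hypothetical placeholder):

```Shell
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40001 --worker http://localhost:40001 --model-path /path/to/another/LLaVA-checkpoint
```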
#### Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)
If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs.
### CLI Inference
A starting script for inference with LLaVA without the need of the Gradio interface. The current implementation only supports a single-turn Q-A session, and the interactive CLI is WIP. This also serves as an example for users to build customized inference scripts.
Example output (varies in different runs):
>
> When visiting this picturesque location with a serene lake and a wooden pier extending over the water, one should be cautious about various safety aspects. Some important considerations include:
>
>
> 1. Ensuring that the pier is structurally sound and stable, as old or weakened pier structures might not support the weight of visitors.
> 2. Being aware of the water depth around the pier and lake, as sudden drop-offs or strong currents may pose a risk to swimmers, boaters, or those who venture too close to the edge.
> 3. Staying vigilant about the presence of wildlife in the area, such as slippery, stealthy fish or other animals that might cause harm or inconvenience.
> 4. Maintaining a safe distance from the water's edge, particularly for children, elderly individuals, or those who are not strong swimmers.
> 5. Following any posted signs or guidelines related to safety and the use of the pier and surrounding areas.
>
>
> By considering these safety precautions, visitors can enjoy the natural beauty of the location while minimizing risks and ensuring a safe and pleasant experience.
>
>
>
Train
-----
LLaVA training consists of two stages: (1) feature alignment stage: use approximately 600K filtered CC3M to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following data to teach the model to follow multimodal instructions.
LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the 'per\_device\_train\_batch\_size' and increase the 'gradient\_accumulation\_steps' accordingly. Always keep the global batch size the same: 'per\_device\_train\_batch\_size' x 'gradient\_accumulation\_steps' x the number of GPUs. For example, the default 128 global batch size can be kept either as 16 per device x 1 accumulation step x 8 GPUs, or as 16 per device x 8 accumulation steps on a single GPU.
### Hyperparameters
We use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.
1. Pretraining
2. Finetuning
### Prepare Vicuna checkpoints
Before you start, prepare our base model Vicuna, which is an instruction-tuned chatbot. Please download its weights here.
Vicuna has two versions: v0 and v1, and the main difference between them is the prompt format. We support both. To ensure the best performance, you need to specify the correct prompt version corresponding to the weights you download: 'v0' for 'v0' weights, and 'v1' for all Vicuna 'v1.x' models (for example, the ScienceQA inference command later in this document passes '--conv-mode llava_v0' for a v0 checkpoint).
### Pretrain (feature alignment)
Please download the subset of the CC3M dataset we use in the paper here.
Pretrain takes around 4 hours for LLaVA-13B on 8x A100 (80G). It takes around 2 hours for 7B checkpoints.
You may run this with a single A100 GPU with the following code. Please note that the 'per\_device\_train\_batch\_size' \* 'gradient\_accumulation\_steps' should be equal to 128 to keep the global batch size the same.
Pretrain: LLaVA-13B, 1x A100 (80G). Time: ~33 hours.
Pretrain: LLaVA-7B, 1x A100 (80G/40G). Time: ~19 hours.
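A minimal single-GPU sketch, adapted from the 8-GPU pretraining script shown earlier in this document; only the process count and gradient accumulation are changed so that 16 x 8 = 128 is preserved, and the data paths and output directory name are illustrative:

```Shell
torchrun --nnodes=1 --nproc_per_node=1 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/llama-vicuna-13b \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
```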
### Visual Instruction Tuning
1. Prepare data
Please download the annotation of our instruction tuning data llava\_instruct\_158k.json, and download the COCO train2017 images here.
2. Extract projector features from the model pretrained in the feature alignment stage (see the sketch after this list).
3. Start training!
You may download our pretrained 'URL' here.
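As a sketch of the projector-extraction step in item 2, adapted from the extraction command shown earlier in this document for the ScienceQA checkpoints (the checkpoint and output names here are illustrative):

```Shell
python scripts/extract_mm_projector.py \
    --model_name_or_path ./checkpoints/llava-13b-pretrain \
    --output ./checkpoints/mm_projector/llava-13b-pretrain.bin
```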
### Lightning
*NOTE: When comparing to LLaVA-Lightning checkpoints in the paper, please use 'LLaVA (Lightning)' instead of 'LLaVA', as they use a different set of training data and schedule.*
LLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40.
For LLaVA Lightning, we create two distilled subsets to ensure both a broad concept coverage and efficiency in training. Furthermore, we only perform instruction tuning for 1 epoch, in contrast to 3 epochs in the paper.
For pretraining, we create a concept-balanced subset of LAION-CC-SBU. It consists of 558K images. Download data here.
For instruction tuning, we create a subset of LLaVA-Instruct-150K. It consists of 80K image-instruction pairs, consisting of 40K conversation and 40K complex reasoning data, with non-overlapping images. Download 'llava\_instruct\_80k.json' here.
#### Hyperparameters
1. Pretraining
2. Visual Instruction Tuning
#### LLaVA-MPT-7b
Thanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7b-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.
*NOTE: When comparing to LLaVA-MPT-7B checkpoints in the paper, please use 'LLaVA-MPT-7B (Lightning)' instead of 'LLaVA', as they use a different base LLM, training data, and schedule.*
NOTE: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.
NOTE: Unlike other LLaVA models, this model should be used directly without delta weights conversion!
NOTE: You need to upgrade to our latest code base to use LLaVA-MPT-7b!
1. Usage
You do not need to download our checkpoint, it will directly load from our Hugging Face model: 'liuhaotian/LLaVA-Lightning-MPT-7B-preview'.
2. Training
We use the same training dataset and hyperparameters as the other Lightning checkpoints.
### ScienceQA
NOTE: Because the ScienceQA experiments were done earlier, the current checkpoints are trained *without* '<im\_start>' and '<im\_end>' tokens. Here we provide our training scripts for the current checkpoints.
1. Pretraining
2. Extract projector features
3. Finetuning
You may download our pretrained 'llava-13b-pretrain-no\_im\_start\_end\_token.bin' here.
Evaluation
----------
### GPT-assisted Evaluation
Our GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.
1. Generate LLaVA responses
2. Evaluate the generated responses. In our case, 'URL' is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.
3. Summarize the evaluation results
### ScienceQA
#### Prepare Data
1. Please see ScienceQA repo for setting up the dataset.
2. Generate ScienceQA dataset for LLaVA conversation-style format.
#### Evaluation
1. Download our pretrained LLaVA-13B (delta) weights for ScienceQA dataset here. Convert the delta weights to actual weights.
2. [Option 1] Multiple-GPU inference
You may evaluate this with multiple GPUs, and concatenate the generated jsonl files. Please refer to our script for batch evaluation and results gathering.
3. [Option 2] Single-GPU inference
(a) Generate LLaVA responses on ScienceQA dataset
(b) Evaluate the generated responses
For reference, we attach our prediction file 'test\_llava-13b\_result.json' here for comparison when reproducing our results, as well as for further analysis in detail.
If you find LLaVA useful for your research and applications, please cite using this BibTeX:
Acknowledgement
---------------
* Vicuna: the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!
Related Projects
----------------
* Instruction Tuning with GPT-4
* LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day
* Otter: In-Context Multi-Modal Instruction Tuning
For future project ideas, please check out:
* SEEM: Segment Everything Everywhere All at Once
* Grounded-Segment-Anything to detect, segment, and generate anything by marrying Grounding DINO and Segment-Anything.
| [
"### Upgrade to latest code base\n\n\nLLaVA Weights\n-------------\n\n\nWe release LLaVA weights as delta weights to comply with the LLaMA model license.\nYou can add our delta to the original LLaMA weights to obtain the LLaVA weights.\n\n\nInstructions:\n\n\n1. Get the original LLaMA weights in the huggingface format by following the instructions here.\n2. Use the following scripts to get LLaVA weights by applying our delta (13b-v0, 7b-v0, lightning-7B-v1-1). It will automatically download delta weights from our Hugging Face account.\n\n\nDemo\n----\n\n\nTo run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions here to download the checkpoints.",
"### Gradio Web UI\n\n\nTo launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server *ONCE*.",
"#### Launch a controller",
"#### Launch a gradio web server.\n\n\nYou just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.",
"#### Launch a model worker\n\n\nThis is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in '--model-path'.\n\n\nWait until the process finishes loading the model and you see \"Uvicorn running on ...\". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.\n\n\nYou can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the '--controller' the same, and modify the '--port' and '--worker' to a different port number for each worker.",
"#### Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)\n\n\nIf your the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs.",
"### CLI Inference\n\n\nA starting script for inference with LLaVA without the need of Gradio interface. The current implementation only supports for a single-turn Q-A session, and the interactive CLI is WIP. This also serves as an example for users to build customized inference scripts.\n\n\nExample output (varies in different runs):\n\n\n\n> \n> When visiting this picturesque location with a serene lake and a wooden pier extending over the water, one should be cautious about various safety aspects. Some important considerations include:\n> \n> \n> 1. Ensuring that the pier is structurally sound andstable, as old or weakened pier structures might not support the weight of visitors.\n> 2. Being aware of the water depth around the pier and lake, as sudden drop-offs or strong currents may pose a risk to swimmers, boaters, or those who venture too close to the edge.\n> 3. Staying vigilant about the presence of wildlife in the area, such as slippery, stealthy fish or other animals that might cause harm or inconvenience.\n> 4. Maintaining a safe distance from the water's edge, particularly for children, elderly individuals, or those who are not strong swimmers.\n> 5. Following any posted signs or guidelines related to safety and the use of the pier and surrounding areas.\n> \n> \n> By considering these safety precautions, visitors can enjoy the natural beauty of the location while minimizing risks and ensuring a safe and pleasant experience.\n> \n> \n> \n\n\nTrain\n-----\n\n\nLLaVA training consists of two stages: (1) feature alignment stage: use approximately 600K filtered CC3M to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following to teach the model to follow multimodal instructions.\n\n\nLLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the 'per\\_device\\_train\\_batch\\_size' and increase the 'gradient\\_accumulation\\_steps' accordingly. Always keep the global batch size the same: 'per\\_device\\_train\\_batch\\_size' x 'gradient\\_accumulation\\_steps'.",
"### Hyperparameters\n\n\nWe use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.\n\n\n1. Pretraining\n\n\n\n2. Finetuning",
"### Prepare Vicuna checkpoints\n\n\nBefore you start, prepare our base model Vicuna, which is an instruction-tuned chatbot. Please download its weights here.\n\n\nVicuna has two versions: v0 and v1, the main difference between them is the prompt of format. We support both. To ensure the best performance, you need to specify the correct prompt version corresponding to the weights you download: 'v0' for 'v0' weights, and 'v1' for all Vicuna 'v1.x' models.",
"### Pretrain (feature alignment)\n\n\nPlease download the subset of the CC3M dataset we use in the paper here.\n\n\nPretrain takes around 4 hours for LLaVA-13B on 8x A100 (80G). It takes around 2 hours for 7B checkpoints.\n\n\nYou may run this with a single A100 GPU with the following code. Please note that the 'per\\_device\\_train\\_batch\\_size' \\* 'gradient\\_accumulation\\_steps' should be equal to 128 to keep the global batch size the same.\n\n\n\nPretrain: LLaVA-13B, 1x A100 (80G). Time: ~33 hours.\n\n\nPretrain: LLaVA-7B, 1x A100 (80G/40G). Time: ~19 hours.",
"### Visual Instruction Tuning\n\n\n1. Prepare data\n\n\nPlease download the annotation of our instruction tuning data llava\\_instruct\\_158k.json, and download the COCO train2017 images here.\n\n\n2. Extract projector features from the pretrained model from the feature alignment stage.\n3. Start training!\n\n\nYou may download our pretrained 'URL' here.",
"### Lightning\n\n\n*NOTE: When comparing to LLaVA-Lightning checkpoints in the paper, please use 'LLaVA (Lightning)' instead of 'LLaVA' as they use different set of training data and schedule.*\n\n\nLLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40.\n\n\nFor LLaVA Lightning, we create two distilled subset to ensure both a broad concept coverage, and the efficiency in training. Furthermore, we only perform instruction tuning for 1 epoch, in contrast to 3 epochs in the paper.\n\n\nFor pretraining, we create a concept-balanced subset of LAION-CC-SBU. It consists of 558K images. Download data here.\n\n\nFor instruction tuning, we create a subset of LLaVA-Instruct-150K. It consists of 80K image-instruction pairs, consisting of 40K conversation and 40K complex reasoning data, with non-overlapping images. Download 'llava\\_instruct\\_80k.json' here.",
"#### Hyperparameters\n\n\n1. Pretraining\n\n\n\n2. Visual Instruction Tuning",
"#### LLaVA-MPT-7b\n\n\nThanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7b-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.\n\n\n*NOTE: When comparing to LLaVA-MPT-7B checkpoints in the paper, please use 'LLaVA-MPT-7B (Lightning)' instead of 'LLaVA' as they use different set of base LLM, training data and schedule.*\n\n\nNOTE: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.\n\n\nNOTE: Unlike other LLaVA models, this model should be used directly without delta weights conversion!\n\n\nNOTE: You need to upgrade to our latest code base to use LLaVA-MPT-7b!\n\n\n1. Usage\n\n\nYou do not need to download our checkpoint, it will directly load from our Hugging Face model: 'liuhaotian/LLaVA-Lightning-MPT-7B-preview'.\n\n\n2. Training\n\n\nWe use the same set of training dataset, and the hyperparameters as other Lightning checkpoints.",
"### ScienceQA\n\n\nNOTE: Due to that ScienceQA experiments were done earlier, the current checkpoints are trained *without* '<im\\_start>' and '<im\\_end>' tokens. Here we provide our training scripts for the current checkpoints.\n\n\n\n1. Pretraining\n\n\n2. Extract projector features\n\n\n3. Finetuning\nYou may download our pretrained 'llava-13b-pretrain-no\\_im\\_start\\_end\\_token.bin' here.\n\n\n\nEvaluation\n----------",
"### GPT-assisted Evaluation\n\n\nOur GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.\n\n\n1. Generate LLaVA responses\n2. Evaluate the generated responses. In our case, 'URL' is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.\n3. Summarize the evaluation results",
"### ScienceQA",
"#### Prepare Data\n\n\n1. Please see ScienceQA repo for setting up the dataset.\n2. Generate ScienceQA dataset for LLaVA conversation-style format.",
"#### Evaluation\n\n\n1. Download our pretrained LLaVA-13B (delta) weights for ScienceQA dataset here. Convert the delta weights to actual weights.\n2. [Option 1] Multiple-GPU inference\nYou may evaluate this with multiple GPUs, and concatenate the generated jsonl files. Please refer to our script for batch evaluation and results gathering.\n3. [Option 2] Single-GPU inference\n\n\n(a) Generate LLaVA responses on ScienceQA dataset\n\n\n(b) Evaluate the generated responses\n\n\nFor reference, we attach our prediction file 'test\\_llava-13b\\_result.json' here for comparison when reproducing our results, as well as for further analysis in detail.\n\n\nIf you find LLaVA useful for your your research and applications, please cite using this BibTeX:\n\n\nAcknowledgement\n---------------\n\n\n* Vicuna: the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!\n\n\nRelated Projects\n----------------\n\n\n* Instruction Tuning with GPT-4\n* LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day\n* Otter: In-Context Multi-Modal Instruction Tuning\n\n\nFor future project ideas, pleae check out:\n\n\n* SEEM: Segment Everything Everywhere All at Once\n* Grounded-Segment-Anything to detect, segment, and generate anything by marrying Grounding DINO and Segment-Anything.](URL via )"
] | [
"TAGS\n#safetensors #arxiv-2304.08485 #arxiv-2306.14895 #arxiv-2306.00890 #arxiv-2305.07895 #endpoints_compatible #region-us \n",
"### Upgrade to latest code base\n\n\nLLaVA Weights\n-------------\n\n\nWe release LLaVA weights as delta weights to comply with the LLaMA model license.\nYou can add our delta to the original LLaMA weights to obtain the LLaVA weights.\n\n\nInstructions:\n\n\n1. Get the original LLaMA weights in the huggingface format by following the instructions here.\n2. Use the following scripts to get LLaVA weights by applying our delta (13b-v0, 7b-v0, lightning-7B-v1-1). It will automatically download delta weights from our Hugging Face account.\n\n\nDemo\n----\n\n\nTo run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions here to download the checkpoints.",
"### Gradio Web UI\n\n\nTo launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server *ONCE*.",
"#### Launch a controller",
"#### Launch a gradio web server.\n\n\nYou just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.",
"#### Launch a model worker\n\n\nThis is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in '--model-path'.\n\n\nWait until the process finishes loading the model and you see \"Uvicorn running on ...\". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.\n\n\nYou can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the '--controller' the same, and modify the '--port' and '--worker' to a different port number for each worker.",
"#### Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)\n\n\nIf your the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs.",
"### CLI Inference\n\n\nA starting script for inference with LLaVA without the need of Gradio interface. The current implementation only supports for a single-turn Q-A session, and the interactive CLI is WIP. This also serves as an example for users to build customized inference scripts.\n\n\nExample output (varies in different runs):\n\n\n\n> \n> When visiting this picturesque location with a serene lake and a wooden pier extending over the water, one should be cautious about various safety aspects. Some important considerations include:\n> \n> \n> 1. Ensuring that the pier is structurally sound andstable, as old or weakened pier structures might not support the weight of visitors.\n> 2. Being aware of the water depth around the pier and lake, as sudden drop-offs or strong currents may pose a risk to swimmers, boaters, or those who venture too close to the edge.\n> 3. Staying vigilant about the presence of wildlife in the area, such as slippery, stealthy fish or other animals that might cause harm or inconvenience.\n> 4. Maintaining a safe distance from the water's edge, particularly for children, elderly individuals, or those who are not strong swimmers.\n> 5. Following any posted signs or guidelines related to safety and the use of the pier and surrounding areas.\n> \n> \n> By considering these safety precautions, visitors can enjoy the natural beauty of the location while minimizing risks and ensuring a safe and pleasant experience.\n> \n> \n> \n\n\nTrain\n-----\n\n\nLLaVA training consists of two stages: (1) feature alignment stage: use approximately 600K filtered CC3M to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following to teach the model to follow multimodal instructions.\n\n\nLLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the 'per\\_device\\_train\\_batch\\_size' and increase the 'gradient\\_accumulation\\_steps' accordingly. Always keep the global batch size the same: 'per\\_device\\_train\\_batch\\_size' x 'gradient\\_accumulation\\_steps'.",
"### Hyperparameters\n\n\nWe use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.\n\n\n1. Pretraining\n\n\n\n2. Finetuning",
"### Prepare Vicuna checkpoints\n\n\nBefore you start, prepare our base model Vicuna, which is an instruction-tuned chatbot. Please download its weights here.\n\n\nVicuna has two versions: v0 and v1, the main difference between them is the prompt of format. We support both. To ensure the best performance, you need to specify the correct prompt version corresponding to the weights you download: 'v0' for 'v0' weights, and 'v1' for all Vicuna 'v1.x' models.",
"### Pretrain (feature alignment)\n\n\nPlease download the subset of the CC3M dataset we use in the paper here.\n\n\nPretrain takes around 4 hours for LLaVA-13B on 8x A100 (80G). It takes around 2 hours for 7B checkpoints.\n\n\nYou may run this with a single A100 GPU with the following code. Please note that the 'per\\_device\\_train\\_batch\\_size' \\* 'gradient\\_accumulation\\_steps' should be equal to 128 to keep the global batch size the same.\n\n\n\nPretrain: LLaVA-13B, 1x A100 (80G). Time: ~33 hours.\n\n\nPretrain: LLaVA-7B, 1x A100 (80G/40G). Time: ~19 hours.",
"### Visual Instruction Tuning\n\n\n1. Prepare data\n\n\nPlease download the annotation of our instruction tuning data llava\\_instruct\\_158k.json, and download the COCO train2017 images here.\n\n\n2. Extract projector features from the pretrained model from the feature alignment stage.\n3. Start training!\n\n\nYou may download our pretrained 'URL' here.",
"### Lightning\n\n\n*NOTE: When comparing to LLaVA-Lightning checkpoints in the paper, please use 'LLaVA (Lightning)' instead of 'LLaVA' as they use different set of training data and schedule.*\n\n\nLLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40.\n\n\nFor LLaVA Lightning, we create two distilled subset to ensure both a broad concept coverage, and the efficiency in training. Furthermore, we only perform instruction tuning for 1 epoch, in contrast to 3 epochs in the paper.\n\n\nFor pretraining, we create a concept-balanced subset of LAION-CC-SBU. It consists of 558K images. Download data here.\n\n\nFor instruction tuning, we create a subset of LLaVA-Instruct-150K. It consists of 80K image-instruction pairs, consisting of 40K conversation and 40K complex reasoning data, with non-overlapping images. Download 'llava\\_instruct\\_80k.json' here.",
"#### Hyperparameters\n\n\n1. Pretraining\n\n\n\n2. Visual Instruction Tuning",
"#### LLaVA-MPT-7b\n\n\nThanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7b-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.\n\n\n*NOTE: When comparing to LLaVA-MPT-7B checkpoints in the paper, please use 'LLaVA-MPT-7B (Lightning)' instead of 'LLaVA' as they use different set of base LLM, training data and schedule.*\n\n\nNOTE: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.\n\n\nNOTE: Unlike other LLaVA models, this model should be used directly without delta weights conversion!\n\n\nNOTE: You need to upgrade to our latest code base to use LLaVA-MPT-7b!\n\n\n1. Usage\n\n\nYou do not need to download our checkpoint, it will directly load from our Hugging Face model: 'liuhaotian/LLaVA-Lightning-MPT-7B-preview'.\n\n\n2. Training\n\n\nWe use the same set of training dataset, and the hyperparameters as other Lightning checkpoints.",
"### ScienceQA\n\n\nNOTE: Due to that ScienceQA experiments were done earlier, the current checkpoints are trained *without* '<im\\_start>' and '<im\\_end>' tokens. Here we provide our training scripts for the current checkpoints.\n\n\n\n1. Pretraining\n\n\n2. Extract projector features\n\n\n3. Finetuning\nYou may download our pretrained 'llava-13b-pretrain-no\\_im\\_start\\_end\\_token.bin' here.\n\n\n\nEvaluation\n----------",
"### GPT-assisted Evaluation\n\n\nOur GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.\n\n\n1. Generate LLaVA responses\n2. Evaluate the generated responses. In our case, 'URL' is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.\n3. Summarize the evaluation results",
"### ScienceQA",
"#### Prepare Data\n\n\n1. Please see ScienceQA repo for setting up the dataset.\n2. Generate ScienceQA dataset for LLaVA conversation-style format.",
"#### Evaluation\n\n\n1. Download our pretrained LLaVA-13B (delta) weights for ScienceQA dataset here. Convert the delta weights to actual weights.\n2. [Option 1] Multiple-GPU inference\nYou may evaluate this with multiple GPUs, and concatenate the generated jsonl files. Please refer to our script for batch evaluation and results gathering.\n3. [Option 2] Single-GPU inference\n\n\n(a) Generate LLaVA responses on ScienceQA dataset\n\n\n(b) Evaluate the generated responses\n\n\nFor reference, we attach our prediction file 'test\\_llava-13b\\_result.json' here for comparison when reproducing our results, as well as for further analysis in detail.\n\n\nIf you find LLaVA useful for your your research and applications, please cite using this BibTeX:\n\n\nAcknowledgement\n---------------\n\n\n* Vicuna: the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!\n\n\nRelated Projects\n----------------\n\n\n* Instruction Tuning with GPT-4\n* LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day\n* Otter: In-Context Multi-Modal Instruction Tuning\n\n\nFor future project ideas, pleae check out:\n\n\n* SEEM: Segment Everything Everywhere All at Once\n* Grounded-Segment-Anything to detect, segment, and generate anything by marrying Grounding DINO and Segment-Anything.](URL via )"
] | [
59,
162,
56,
7,
78,
148,
67,
470,
53,
110,
171,
81,
232,
18,
265,
120,
98,
5,
36,
328
] | [
"TAGS\n#safetensors #arxiv-2304.08485 #arxiv-2306.14895 #arxiv-2306.00890 #arxiv-2305.07895 #endpoints_compatible #region-us \n### Upgrade to latest code base\n\n\nLLaVA Weights\n-------------\n\n\nWe release LLaVA weights as delta weights to comply with the LLaMA model license.\nYou can add our delta to the original LLaMA weights to obtain the LLaVA weights.\n\n\nInstructions:\n\n\n1. Get the original LLaMA weights in the huggingface format by following the instructions here.\n2. Use the following scripts to get LLaVA weights by applying our delta (13b-v0, 7b-v0, lightning-7B-v1-1). It will automatically download delta weights from our Hugging Face account.\n\n\nDemo\n----\n\n\nTo run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions here to download the checkpoints.### Gradio Web UI\n\n\nTo launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server *ONCE*.#### Launch a controller#### Launch a gradio web server.\n\n\nYou just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.#### Launch a model worker\n\n\nThis is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in '--model-path'.\n\n\nWait until the process finishes loading the model and you see \"Uvicorn running on ...\". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.\n\n\nYou can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the '--controller' the same, and modify the '--port' and '--worker' to a different port number for each worker.#### Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)\n\n\nIf your the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs.### CLI Inference\n\n\nA starting script for inference with LLaVA without the need of Gradio interface. The current implementation only supports for a single-turn Q-A session, and the interactive CLI is WIP. This also serves as an example for users to build customized inference scripts.\n\n\nExample output (varies in different runs):\n\n\n\n> \n> When visiting this picturesque location with a serene lake and a wooden pier extending over the water, one should be cautious about various safety aspects. Some important considerations include:\n> \n> \n> 1. Ensuring that the pier is structurally sound andstable, as old or weakened pier structures might not support the weight of visitors.\n> 2. Being aware of the water depth around the pier and lake, as sudden drop-offs or strong currents may pose a risk to swimmers, boaters, or those who venture too close to the edge.\n> 3. Staying vigilant about the presence of wildlife in the area, such as slippery, stealthy fish or other animals that might cause harm or inconvenience.\n> 4. Maintaining a safe distance from the water's edge, particularly for children, elderly individuals, or those who are not strong swimmers.\n> 5. 
Following any posted signs or guidelines related to safety and the use of the pier and surrounding areas.\n> \n> \n> By considering these safety precautions, visitors can enjoy the natural beauty of the location while minimizing risks and ensuring a safe and pleasant experience.\n> \n> \n> \n\n\nTrain\n-----\n\n\nLLaVA training consists of two stages: (1) feature alignment stage: use approximately 600K filtered CC3M to connect a *frozen pretrained* vision encoder to a *frozen LLM*; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following to teach the model to follow multimodal instructions.\n\n\nLLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the 'per\\_device\\_train\\_batch\\_size' and increase the 'gradient\\_accumulation\\_steps' accordingly. Always keep the global batch size the same: 'per\\_device\\_train\\_batch\\_size' x 'gradient\\_accumulation\\_steps'.### Hyperparameters\n\n\nWe use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.\n\n\n1. Pretraining\n\n\n\n2. Finetuning### Prepare Vicuna checkpoints\n\n\nBefore you start, prepare our base model Vicuna, which is an instruction-tuned chatbot. Please download its weights here.\n\n\nVicuna has two versions: v0 and v1, the main difference between them is the prompt of format. We support both. To ensure the best performance, you need to specify the correct prompt version corresponding to the weights you download: 'v0' for 'v0' weights, and 'v1' for all Vicuna 'v1.x' models.### Pretrain (feature alignment)\n\n\nPlease download the subset of the CC3M dataset we use in the paper here.\n\n\nPretrain takes around 4 hours for LLaVA-13B on 8x A100 (80G). It takes around 2 hours for 7B checkpoints.\n\n\nYou may run this with a single A100 GPU with the following code. Please note that the 'per\\_device\\_train\\_batch\\_size' \\* 'gradient\\_accumulation\\_steps' should be equal to 128 to keep the global batch size the same.\n\n\n\nPretrain: LLaVA-13B, 1x A100 (80G). Time: ~33 hours.\n\n\nPretrain: LLaVA-7B, 1x A100 (80G/40G). Time: ~19 hours.### Visual Instruction Tuning\n\n\n1. Prepare data\n\n\nPlease download the annotation of our instruction tuning data llava\\_instruct\\_158k.json, and download the COCO train2017 images here.\n\n\n2. Extract projector features from the pretrained model from the feature alignment stage.\n3. Start training!\n\n\nYou may download our pretrained 'URL' here.### Lightning\n\n\n*NOTE: When comparing to LLaVA-Lightning checkpoints in the paper, please use 'LLaVA (Lightning)' instead of 'LLaVA' as they use different set of training data and schedule.*\n\n\nLLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40.\n\n\nFor LLaVA Lightning, we create two distilled subset to ensure both a broad concept coverage, and the efficiency in training. Furthermore, we only perform instruction tuning for 1 epoch, in contrast to 3 epochs in the paper.\n\n\nFor pretraining, we create a concept-balanced subset of LAION-CC-SBU. It consists of 558K images. Download data here.\n\n\nFor instruction tuning, we create a subset of LLaVA-Instruct-150K. It consists of 80K image-instruction pairs, consisting of 40K conversation and 40K complex reasoning data, with non-overlapping images. Download 'llava\\_instruct\\_80k.json' here.#### Hyperparameters\n\n\n1. Pretraining\n\n\n\n2. 
Visual Instruction Tuning#### LLaVA-MPT-7b\n\n\nThanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7b-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.\n\n\n*NOTE: When comparing to LLaVA-MPT-7B checkpoints in the paper, please use 'LLaVA-MPT-7B (Lightning)' instead of 'LLaVA' as they use different set of base LLM, training data and schedule.*\n\n\nNOTE: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.\n\n\nNOTE: Unlike other LLaVA models, this model should be used directly without delta weights conversion!\n\n\nNOTE: You need to upgrade to our latest code base to use LLaVA-MPT-7b!\n\n\n1. Usage\n\n\nYou do not need to download our checkpoint, it will directly load from our Hugging Face model: 'liuhaotian/LLaVA-Lightning-MPT-7B-preview'.\n\n\n2. Training\n\n\nWe use the same set of training dataset, and the hyperparameters as other Lightning checkpoints.### ScienceQA\n\n\nNOTE: Due to that ScienceQA experiments were done earlier, the current checkpoints are trained *without* '<im\\_start>' and '<im\\_end>' tokens. Here we provide our training scripts for the current checkpoints.\n\n\n\n1. Pretraining\n\n\n2. Extract projector features\n\n\n3. Finetuning\nYou may download our pretrained 'llava-13b-pretrain-no\\_im\\_start\\_end\\_token.bin' here.\n\n\n\nEvaluation\n----------### GPT-assisted Evaluation\n\n\nOur GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.\n\n\n1. Generate LLaVA responses\n2. Evaluate the generated responses. In our case, 'URL' is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.\n3. Summarize the evaluation results### ScienceQA#### Prepare Data\n\n\n1. Please see ScienceQA repo for setting up the dataset.\n2. Generate ScienceQA dataset for LLaVA conversation-style format.#### Evaluation\n\n\n1. Download our pretrained LLaVA-13B (delta) weights for ScienceQA dataset here. Convert the delta weights to actual weights.\n2. [Option 1] Multiple-GPU inference\nYou may evaluate this with multiple GPUs, and concatenate the generated jsonl files. Please refer to our script for batch evaluation and results gathering.\n3. [Option 2] Single-GPU inference\n\n\n(a) Generate LLaVA responses on ScienceQA dataset\n\n\n(b) Evaluate the generated responses\n\n\nFor reference, we attach our prediction file 'test\\_llava-13b\\_result.json' here for comparison when reproducing our results, as well as for further analysis in detail.\n\n\nIf you find LLaVA useful for your your research and applications, please cite using this BibTeX:\n\n\nAcknowledgement\n---------------\n\n\n* Vicuna: the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!\n\n\nRelated Projects\n----------------\n\n\n* Instruction Tuning with GPT-4\n* LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day\n* Otter: In-Context Multi-Modal Instruction Tuning\n\n\nFor future project ideas, pleae check out:\n\n\n* SEEM: Segment Everything Everywhere All at Once\n* Grounded-Segment-Anything to detect, segment, and generate anything by marrying Grounding DINO and Segment-Anything.](URL via )"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | lunarsylph/mooncell_v34 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:32:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
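The listed hyperparameters map onto `TrainingArguments` roughly as shown below; this is a reconstruction for illustration, not the original training script, and `output_dir` plus anything not listed above are placeholder assumptions (the betas/epsilon shown above are the library defaults).

```python
from transformers import TrainingArguments

# Sketch of the run configuration listed above; values not documented in this
# card (e.g. output_dir) are placeholders.
training_args = TrainingArguments(
    output_dir="bert-finetuned-squad",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                           # "Native AMP" mixed precision
)
```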
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
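### Example usage

The card does not yet include an inference snippet; the following is an unofficial sketch of extractive question answering with this checkpoint. The model id is taken from this repository's path, and the question/context pair is invented for illustration.

```python
from transformers import pipeline

# Model id assumed from the repository path; adjust if the checkpoint was
# published under a different name.
qa = pipeline("question-answering", model="AlexYang33/bert-finetuned-sql")

result = qa(
    question="How many epochs was the model trained for?",
    context="The checkpoint was fine-tuned from bert-base-cased for 3 epochs with Adam.",
)
print(result["answer"], result["score"])
```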
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-squad", "results": []}]} | AlexYang33/bert-finetuned-sql | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:35:59+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #question-answering #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-finetuned-squad
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# bert-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #question-answering #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
50,
29,
7,
9,
9,
4,
102,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #question-answering #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #endpoints_compatible #region-us \n# bert-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_dir
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1165
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
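### Example usage

As an unofficial sketch (the model id is taken from this repository's path, and the label names returned depend on how the classification head was configured, which this card does not document):

```python
from transformers import pipeline

# Model id assumed from the repository path; the labels are whatever was set
# at fine-tuning time and are not described in this card.
clf = pipeline("text-classification", model="tralon/test-v4")
print(clf("This DeBERTa-v3 checkpoint reports 96.67% accuracy on its evaluation set."))
```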
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-base", "model-index": [{"name": "output_dir", "results": []}]} | tralon/test-v4 | null | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T01:36:38+00:00 | [] | [] | TAGS
#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# output_dir
This model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1165
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# output_dir\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1165\n- Accuracy: 0.9667",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# output_dir\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1165\n- Accuracy: 0.9667",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
57,
55,
7,
9,
9,
4,
117,
5,
44
] | [
"TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# output_dir\n\nThis model is a fine-tuned version of microsoft/deberta-v3-base on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1165\n- Accuracy: 0.9667## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/nr5v2la | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:37:26+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Introducing Mermaid-Llama-6.7B-RAG
Powered by 6.7 billion parameters, this model sets the bar for excellence in
AI-driven code comprehension and narrative visualization, now with a further reduction in hallucinations inspired by https://huggingface.co/jondurbin,
who created the "Context-Obedient" chat template. We stand on the shoulders of giants, so thank you, Jon Durbin, the original RAG pioneer for LLMs.
Special thanks to Eric Hartford for personally sharing his intuition on prompt templates; your shared wisdom has helped me develop my own style for my specialized Mermaid Models.
Beyond turning input into Flow Diagrams, this RAG Model excels at Formatted Knowledge Graph utilization in the Mermaid JS Syntax.
See more Mermaid here: https://www.mermaidchart.com

---
```
Note: I have been informed over the past 2 months that my models are being used in production.
Through insights gathered on how my models are being used effectively in business environments,
I have tailored this model to the needs of those who have reached out to me.
So please enjoy, and feedback is always welcome, good or bad. I prefer bad, actually.
- Current issue: lack of compute. I will solve this once I get a job / money to train. A context length of 4096 is very limiting for those who want full system diagrams without using aggregation strategies.
```
### Key Features
1. **Code Understanding:**
- Masters Python's intricacies.
- Generates accurate Mermaid Diagram Flow Charts.
- Ideal for developers visualizing code logic.
2. **Storytelling Capabilities:**
- Converts narratives into captivating Mermaid Diagrams.
- Maps character interactions, plot developments, and narrative arcs.
3. **Unmatched Performance:**
- Surpasses GPT-4 in generating well-organized Mermaid Diagrams.
4. **Enhanced Adherence to Context (New):**
- Incorporates contextual prompts to improve adherence and reduce hallucinations.
- Supports the airoboros context-obedient format.
### Collaboration
For collaboration opportunities to enhance Mermaid's capabilities, contact [email protected].
### Use Cases
- **Retrieval-Augmented Generation (RAG):** Creates condensed knowledge graphs to enhance retrieval using vector databases for efficient information retrieval. Combines knowledge graphs and context-aware RAG capabilities for better knowledge condensation.
- **Code Documentation:** Generates automatic visual flow charts from Python code.
- **Storyboarding:** Creates visually appealing diagrams for storytelling.
- **Project Planning:** Generates visual project flow maps for effective team communication.
- **Learning Python:** Assists students in visualizing Python code structures.
- **Game Design:** Visualizes game storylines for coherent narrative structure.
### Dataset Format (New)
To enhance contextual adherence and reduce hallucinations, the dataset follows the format below:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
BEGININSTRUCTION
[insert your instruction(s)]
ENDINSTRUCTION
```
This structure, while verbose, helps models understand specific responses and sources.
### Example
**Prompt:**
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
Blueberries are now green.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
**Expected Response:**
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
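For programmatic use, a small helper that assembles this prompt shape could look like the following sketch; the function and argument names are illustrative and not part of the original format specification.

```python
def build_context_obedient_prompt(blocks, instruction):
    """Assemble the BEGININPUT/BEGINCONTEXT prompt format shown above.

    `blocks` is a list of (metadata_dict, text) pairs; the names here are
    illustrative, not part of the card's specification.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT\nBEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

print(build_context_obedient_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "Blueberries are now green.")],
    "What color are blueberries? Source?",
))
```

Running this reproduces the prompt from the example above.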
### Proof of Concept
A VSCode Extension is forthcoming, providing a live flow map upon pausing for more than 10 seconds.
### Training Specifications
- **LoRA Rank:** 2048
- **LoRA Alpha:** 4096
- **Batch Size:** 1
- **Micro Batch Size:** 1
- **Cutoff Length:** 4096
- **Save every n steps:** 1000
- **Epochs:** 3
- **Learning Rate:** 1e-6
- **LR Scheduler:** Cosine
**Target Modules:**
- Enable q_proj
- Enable v_proj
- Enable k_proj
- Enable o_proj
- Enable gate_proj
- Enable down_proj
- Enable up_proj
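For readers who want to approximate this setup with the PEFT library, a configuration sketch matching the listed rank, alpha, and target modules is given below. This is not the exact training stack used here, and a LoRA rank of 2048 is far larger than typical defaults, so treat it as a transcription of the numbers above rather than a recommendation.

```python
from peft import LoraConfig

# Mirrors the specification listed above; the base model, optimizer, and
# trainer wiring are not part of this card and must be supplied separately.
lora_config = LoraConfig(
    r=2048,
    lora_alpha=4096,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",
)
```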
---
## Getting Started
Start by downloading one of my models.

Load the model.

Use my prompt template to generate a Mermaid code block, which can be viewed in the Mermaid Live Editor or using the Mermaid CLI tool.

Here we open the VLLM GUI program, with Mermaid-Llama-8B still loaded in VRAM, to compare the flow diagram to the actual program and show the lightweight capabilities of small models on consumer hardware.

## More on my VLLM Class and inference GUI : https://github.com/Troys-Code/VLLM

---
Note: This model should be treated as an Auto-Complete Model. Do not try talking to it in chat; you are gonna get garbage. Those layers have been pruned and replaced, and that is all you will hear of my secret sauce on training on small (< 1000 entry) datasets.
```
STAY TUNED: THERE'S MORE TO COME, SOON MERMAID MODELS WILL BE ABLE TO TURN "MERMAID" --> "CODE"
This new dataset is gonna be a game changer for refactoring code blocks if it works.
I am interviewing like crazy so this may take some time as my days have been hectic, imagine studying for finals week every week.
``` | {"license": "cc-by-4.0"} | TroyDoesAI/Mermaid-Llama-6.7B-RAG | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:38:43+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Introducing Mermaid-Llama-6.7B-RAG
Powered by 6.7 billion parameters, this model sets the bar for excellence in
AI-driven code comprehension and narrative visualization now with further reduction of hallucinations inspired by URL
who created the "Context-Obedient" chat template. We stand on the shoulders of Giants, so we thank you Jon Durbin the original RAG pioneer for LLM's.
Special Thanks to Eric Hartford for sharing his intuition with me personally on prompt templates, your shared wisdom has helped me innovate my own style that works for my own specialized Mermaid Models.
Beyond turning input into Flow Diagrams this RAG Model excels in Formatted Knowledge Graph utilization in the Mermaid JS Syntax.
See more Mermaid Here : URL
!MermaidLlama GIF
---
### Key Features
1. Code Understanding:
- Masters Python's intricacies.
- Generates accurate Mermaid Diagram Flow Charts.
- Ideal for developers visualizing code logic.
2. Storytelling Capabilities:
- Converts narratives into captivating Mermaid Diagrams.
- Maps character interactions, plot developments, and narrative arcs.
3. Unmatched Performance:
- Surpasses GPT-4 in generating well-organized Mermaid Diagrams.
4. Enhanced Adherence to Context (New):
- Incorporates contextual prompts to improve adherence and reduce hallucinations.
- Supports the airoboros context-obedient format.
### Collaboration
For collaboration opportunities to enhance Mermaid's capabilities, contact troydoesai@URL.
### Use Cases
- Retrieval-Augmented Generation (RAG): Creates condensed knowledge graphs to enhance retrieval using vector databases for efficient information retrieval. Combines knowledge graphs and context-aware RAG capabilities for better knowledge condensation.
- Code Documentation: Generates automatic visual flow charts from Python code.
- Storyboarding: Creates visually appealing diagrams for storytelling.
- Project Planning: Generates visual project flow maps for effective team communication.
- Learning Python: Assists students in visualizing Python code structures.
- Game Design: Visualizes game storylines for coherent narrative structure.
### Dataset Format (New)
To enhance contextual adherence and reduce hallucinations, the dataset follows the format below:
This structure, while verbose, helps models understand specific responses and sources.
### Example
Prompt:
Expected Response:
### Proof of Concept
A VSCode Extension is forthcoming, providing a live flow map upon pausing for more than 10 seconds.
### Training Specifications
- LoRA Rank: 2048
- LoRA Alpha: 4096
- Batch Size: 1
- Micro Batch Size: 1
- Cutoff Length: 4096
- Save every n steps: 1000
- Epochs: 3
- Learning Rate: 1e-6
- LR Scheduler: Cosine
Target Modules:
- Enable q_proj
- Enable v_proj
- Enable k_proj
- Enable o_proj
- Enable gate_proj
- Enable down_proj
- Enable up_proj
---
## Getting Started
Start by downloading one of my models.
!0 TroyDoesAI GIF
Load the model.
!1 Load Model in 4-bit Show Example Use GIF
Use my prompt template to generate a Mermaid code block, which can be viewed in the Mermaid Live Editor or using the Mermaid CLI tool.
!2 Loaded Model in Full Precision 16-bit Show Inference and Mermaid Live Editor GIF
Here we open the VLLM GUI Program while still running in Vram the Mermaid-Llama-8B to compare the flow diagram to the actual program and show the lightweight capabilites of small models on consumer hardware.
!3 Open The Program VLLM Program With Full Precision Mermaid-Llama-8B Running to Evaluate Flow Map GIF
## More on my VLLM Class and inference GUI : URL
!Python RtdBsaz8gy GIF
---
Note: This model should be treated as an Auto-Complete Model, Do not try talking to it in chat you are gonna get garbage, those layers have been pruned and replaced, that is all you will hear of my secret sauce on training on small < 1000 entry datasets.
| [
"# Introducing Mermaid-Llama-6.7B-RAG\n\nPowered by 6.7 billion parameters, this model sets the bar for excellence in \nAI-driven code comprehension and narrative visualization now with further reduction of hallucinations inspired by URL\nwho created the \"Context-Obedient\" chat template. We stand on the shoulders of Giants, so we thank you Jon Durbin the original RAG pioneer for LLM's.\nSpecial Thanks to Eric Hartford for sharing his intuition with me personally on prompt templates, your shared wisdom has helped me innovate my own style that works for my own specialized Mermaid Models.\n\nBeyond turning input into Flow Diagrams this RAG Model excels in Formatted Knowledge Graph utilization in the Mermaid JS Syntax.\n\nSee more Mermaid Here : URL\n\n!MermaidLlama GIF\n\n---",
"### Key Features\n\n1. Code Understanding:\n - Masters Python's intricacies.\n - Generates accurate Mermaid Diagram Flow Charts.\n - Ideal for developers visualizing code logic.\n\n2. Storytelling Capabilities:\n - Converts narratives into captivating Mermaid Diagrams.\n - Maps character interactions, plot developments, and narrative arcs.\n\n3. Unmatched Performance:\n - Surpasses GPT-4 in generating well-organized Mermaid Diagrams.\n\n4. Enhanced Adherence to Context (New):\n - Incorporates contextual prompts to improve adherence and reduce hallucinations.\n - Supports the airoboros context-obedient format.",
"### Collaboration\n\nFor collaboration opportunities to enhance Mermaid's capabilities, contact troydoesai@URL.",
"### Use Cases\n\n- Retrieval-Augmented Generation (RAG): Creates condensed knowledge graphs to enhance retrieval using vector databases for efficient information retrieval. Combines knowledge graphs and context-aware RAG capabilities for better knowledge condensation.\n- Code Documentation: Generates automatic visual flow charts from Python code.\n- Storyboarding: Creates visually appealing diagrams for storytelling.\n- Project Planning: Generates visual project flow maps for effective team communication.\n- Learning Python: Assists students in visualizing Python code structures.\n- Game Design: Visualizes game storylines for coherent narrative structure.",
"### Dataset Format (New)\nTo enhance contextual adherence and reduce hallucinations, the dataset follows the format below:\n\n\n\nThis structure, while verbose, helps models understand specific responses and sources.",
"### Example\n\nPrompt:\n\n\nExpected Response:",
"### Proof of Concept\n\nA VSCode Extension is forthcoming, providing a live flow map upon pausing for more than 10 seconds.",
"### Training Specifications\n\n- LoRA Rank: 2048\n- LoRA Alpha: 4096\n- Batch Size: 1\n- Micro Batch Size: 1\n- Cutoff Length: 4096\n- Save every n steps: 1000\n- Epochs: 3\n- Learning Rate: 1e-6\n- LR Scheduler: Cosine\n\nTarget Modules:\n- Enable q_proj\n- Enable v_proj\n- Enable k_proj\n- Enable o_proj\n- Enable gate_proj\n- Enable down_proj\n- Enable up_proj\n\n---",
"## Getting Started\n\nStart by downloading one of my models.\n\n!0 TroyDoesAI GIF\n\nLoad the model.\n\n!1 Load Model in 4-bit Show Example Use GIF\n\nUse my prompt template to generate a Mermaid code block, which can be viewed in the Mermaid Live Editor or using the Mermaid CLI tool.\n\n!2 Loaded Model in Full Precision 16-bit Show Inference and Mermaid Live Editor GIF\n\nHere we open the VLLM GUI Program while still running in Vram the Mermaid-Llama-8B to compare the flow diagram to the actual program and show the lightweight capabilites of small models on consumer hardware.\n\n!3 Open The Program VLLM Program With Full Precision Mermaid-Llama-8B Running to Evaluate Flow Map GIF",
"## More on my VLLM Class and inference GUI : URL\n\n!Python RtdBsaz8gy GIF\n---\n\nNote: This model should be treated as an Auto-Complete Model, Do not try talking to it in chat you are gonna get garbage, those layers have been pruned and replaced, that is all you will hear of my secret sauce on training on small < 1000 entry datasets."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Introducing Mermaid-Llama-6.7B-RAG\n\nPowered by 6.7 billion parameters, this model sets the bar for excellence in \nAI-driven code comprehension and narrative visualization now with further reduction of hallucinations inspired by URL\nwho created the \"Context-Obedient\" chat template. We stand on the shoulders of Giants, so we thank you Jon Durbin the original RAG pioneer for LLM's.\nSpecial Thanks to Eric Hartford for sharing his intuition with me personally on prompt templates, your shared wisdom has helped me innovate my own style that works for my own specialized Mermaid Models.\n\nBeyond turning input into Flow Diagrams this RAG Model excels in Formatted Knowledge Graph utilization in the Mermaid JS Syntax.\n\nSee more Mermaid Here : URL\n\n!MermaidLlama GIF\n\n---",
"### Key Features\n\n1. Code Understanding:\n - Masters Python's intricacies.\n - Generates accurate Mermaid Diagram Flow Charts.\n - Ideal for developers visualizing code logic.\n\n2. Storytelling Capabilities:\n - Converts narratives into captivating Mermaid Diagrams.\n - Maps character interactions, plot developments, and narrative arcs.\n\n3. Unmatched Performance:\n - Surpasses GPT-4 in generating well-organized Mermaid Diagrams.\n\n4. Enhanced Adherence to Context (New):\n - Incorporates contextual prompts to improve adherence and reduce hallucinations.\n - Supports the airoboros context-obedient format.",
"### Collaboration\n\nFor collaboration opportunities to enhance Mermaid's capabilities, contact troydoesai@URL.",
"### Use Cases\n\n- Retrieval-Augmented Generation (RAG): Creates condensed knowledge graphs to enhance retrieval using vector databases for efficient information retrieval. Combines knowledge graphs and context-aware RAG capabilities for better knowledge condensation.\n- Code Documentation: Generates automatic visual flow charts from Python code.\n- Storyboarding: Creates visually appealing diagrams for storytelling.\n- Project Planning: Generates visual project flow maps for effective team communication.\n- Learning Python: Assists students in visualizing Python code structures.\n- Game Design: Visualizes game storylines for coherent narrative structure.",
"### Dataset Format (New)\nTo enhance contextual adherence and reduce hallucinations, the dataset follows the format below:\n\n\n\nThis structure, while verbose, helps models understand specific responses and sources.",
"### Example\n\nPrompt:\n\n\nExpected Response:",
"### Proof of Concept\n\nA VSCode Extension is forthcoming, providing a live flow map upon pausing for more than 10 seconds.",
"### Training Specifications\n\n- LoRA Rank: 2048\n- LoRA Alpha: 4096\n- Batch Size: 1\n- Micro Batch Size: 1\n- Cutoff Length: 4096\n- Save every n steps: 1000\n- Epochs: 3\n- Learning Rate: 1e-6\n- LR Scheduler: Cosine\n\nTarget Modules:\n- Enable q_proj\n- Enable v_proj\n- Enable k_proj\n- Enable o_proj\n- Enable gate_proj\n- Enable down_proj\n- Enable up_proj\n\n---",
"## Getting Started\n\nStart by downloading one of my models.\n\n!0 TroyDoesAI GIF\n\nLoad the model.\n\n!1 Load Model in 4-bit Show Example Use GIF\n\nUse my prompt template to generate a Mermaid code block, which can be viewed in the Mermaid Live Editor or using the Mermaid CLI tool.\n\n!2 Loaded Model in Full Precision 16-bit Show Inference and Mermaid Live Editor GIF\n\nHere we open the VLLM GUI Program while still running in Vram the Mermaid-Llama-8B to compare the flow diagram to the actual program and show the lightweight capabilites of small models on consumer hardware.\n\n!3 Open The Program VLLM Program With Full Precision Mermaid-Llama-8B Running to Evaluate Flow Map GIF",
"## More on my VLLM Class and inference GUI : URL\n\n!Python RtdBsaz8gy GIF\n---\n\nNote: This model should be treated as an Auto-Complete Model, Do not try talking to it in chat you are gonna get garbage, those layers have been pruned and replaced, that is all you will hear of my secret sauce on training on small < 1000 entry datasets."
] | [
44,
166,
124,
23,
109,
43,
9,
26,
116,
157,
87
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Introducing Mermaid-Llama-6.7B-RAG\n\nPowered by 6.7 billion parameters, this model sets the bar for excellence in \nAI-driven code comprehension and narrative visualization now with further reduction of hallucinations inspired by URL\nwho created the \"Context-Obedient\" chat template. We stand on the shoulders of Giants, so we thank you Jon Durbin the original RAG pioneer for LLM's.\nSpecial Thanks to Eric Hartford for sharing his intuition with me personally on prompt templates, your shared wisdom has helped me innovate my own style that works for my own specialized Mermaid Models.\n\nBeyond turning input into Flow Diagrams this RAG Model excels in Formatted Knowledge Graph utilization in the Mermaid JS Syntax.\n\nSee more Mermaid Here : URL\n\n!MermaidLlama GIF\n\n---### Key Features\n\n1. Code Understanding:\n - Masters Python's intricacies.\n - Generates accurate Mermaid Diagram Flow Charts.\n - Ideal for developers visualizing code logic.\n\n2. Storytelling Capabilities:\n - Converts narratives into captivating Mermaid Diagrams.\n - Maps character interactions, plot developments, and narrative arcs.\n\n3. Unmatched Performance:\n - Surpasses GPT-4 in generating well-organized Mermaid Diagrams.\n\n4. Enhanced Adherence to Context (New):\n - Incorporates contextual prompts to improve adherence and reduce hallucinations.\n - Supports the airoboros context-obedient format.### Collaboration\n\nFor collaboration opportunities to enhance Mermaid's capabilities, contact troydoesai@URL.### Use Cases\n\n- Retrieval-Augmented Generation (RAG): Creates condensed knowledge graphs to enhance retrieval using vector databases for efficient information retrieval. 
Combines knowledge graphs and context-aware RAG capabilities for better knowledge condensation.\n- Code Documentation: Generates automatic visual flow charts from Python code.\n- Storyboarding: Creates visually appealing diagrams for storytelling.\n- Project Planning: Generates visual project flow maps for effective team communication.\n- Learning Python: Assists students in visualizing Python code structures.\n- Game Design: Visualizes game storylines for coherent narrative structure.### Dataset Format (New)\nTo enhance contextual adherence and reduce hallucinations, the dataset follows the format below:\n\n\n\nThis structure, while verbose, helps models understand specific responses and sources.### Example\n\nPrompt:\n\n\nExpected Response:### Proof of Concept\n\nA VSCode Extension is forthcoming, providing a live flow map upon pausing for more than 10 seconds.### Training Specifications\n\n- LoRA Rank: 2048\n- LoRA Alpha: 4096\n- Batch Size: 1\n- Micro Batch Size: 1\n- Cutoff Length: 4096\n- Save every n steps: 1000\n- Epochs: 3\n- Learning Rate: 1e-6\n- LR Scheduler: Cosine\n\nTarget Modules:\n- Enable q_proj\n- Enable v_proj\n- Enable k_proj\n- Enable o_proj\n- Enable gate_proj\n- Enable down_proj\n- Enable up_proj\n\n---## Getting Started\n\nStart by downloading one of my models.\n\n!0 TroyDoesAI GIF\n\nLoad the model.\n\n!1 Load Model in 4-bit Show Example Use GIF\n\nUse my prompt template to generate a Mermaid code block, which can be viewed in the Mermaid Live Editor or using the Mermaid CLI tool.\n\n!2 Loaded Model in Full Precision 16-bit Show Inference and Mermaid Live Editor GIF\n\nHere we open the VLLM GUI Program while still running in Vram the Mermaid-Llama-8B to compare the flow diagram to the actual program and show the lightweight capabilites of small models on consumer hardware.\n\n!3 Open The Program VLLM Program With Full Precision Mermaid-Llama-8B Running to Evaluate Flow Map GIF## More on my VLLM Class and inference GUI : URL\n\n!Python RtdBsaz8gy GIF\n---\n\nNote: This model should be treated as an Auto-Complete Model, Do not try talking to it in chat you are gonna get garbage, those layers have been pruned and replaced, that is all you will hear of my secret sauce on training on small < 1000 entry datasets."
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
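A minimal illustration of the single knob described above, the RoPE base frequency ("theta"), is sketched below; the values come from the progressive-training table further down, and this is only a config-level sketch, not the training procedure itself.

```python
from transformers import AutoConfig

# Load the stock Llama-3 8B Instruct config (gated repo, requires access) and
# show the two fields this work changes; 2.80e9 and 2**20 are the final-stage
# values from the table below.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
print("stock:", config.rope_theta, config.max_position_embeddings)

config.rope_theta = 2.80e9                # RoPE theta for the 1048k stage
config.max_position_embeddings = 2 ** 20  # ~1048k context window
```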
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai)'s high-performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
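If you prefer to stay in Python, a roughly equivalent download (a sketch using `huggingface_hub`, mirroring the `original/*` pattern from the command above) looks like this:

```python
from huggingface_hub import snapshot_download

# Download only the original (non-HF) checkpoint files, as in the CLI command above.
snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    allow_patterns=["original/*"],
    local_dir="Meta-Llama-3-8B-Instruct",
)
```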
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model, respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
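As a rough illustration of the system-level safeguards mentioned above, the sketch below runs Meta Llama Guard 2 as an input classifier with `transformers`. The prompt formatting comes from that model's chat template; the verdict parsing noted in the comment is an assumption you should adapt to your own policy.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

guard_id = "meta-llama/Meta-Llama-Guard-2-8B"
tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(guard_id, torch_dtype=torch.bfloat16, device_map="auto")

def moderate(chat):
    # The chat template formats the conversation into Llama Guard 2's moderation prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard.device)
    output = guard.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

verdict = moderate([{"role": "user", "content": "How do I tie a bowline knot?"}])
print(verdict)  # expected to start with "safe" or "unsafe" plus a category code
```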
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw3.7-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:39:01+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| [<img src="URL width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message at contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data:
For training data, we generate long contexts by augmenting SlimPajama.
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes (8B and 70B parameters) in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
#### Transformers AutoModelForCausalLM
### Use with 'llama3'
Please, follow the instructions in the repository
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model, respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
52,
42,
6,
13,
429,
8,
6,
270,
280,
72,
115,
118,
126,
2136
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.#### Transformers pipeline#### Transformers AutoModelForCausalLM### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.### Base pretrained models### Instruction tuned models### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. 
We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. 
On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
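Since this section is left unfilled, the snippet below is only a sketch: it assumes the checkpoint loads with the standard Transformers causal-LM classes, as the repository tags (llama, text-generation) suggest.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shallow6414/gi2xkq1"  # repository id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```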
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/gi2xkq1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:39:51+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained agent from this repo and load it.
# The filename below assumes the usual "<algo>-<env>.zip" upload convention.
checkpoint = load_from_hub("williamchenaeo/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
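To sanity-check the reported score, the loaded agent can be evaluated as follows (a sketch; it assumes a Gymnasium `LunarLander-v2` environment matching the training setup):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```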
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "272.42 +/- 14.54", "name": "mean_reward", "verified": false}]}]}]} | williamchenaeo/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-30T01:42:40+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
31,
35,
17
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA10
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
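For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments` (illustrative only; the actual training script is not part of this card, and the output directory name is hypothetical):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="O0428HMA10",         # hypothetical output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 8 * 16 = total train batch size of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                       # "Native AMP" mixed precision
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults.
```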
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7784 | 0.09 | 10 | 0.1810 |
| 0.1728 | 0.18 | 20 | 0.1533 |
| 0.1513 | 0.27 | 30 | 0.1702 |
| 0.1572 | 0.36 | 40 | 0.1529 |
| 0.151 | 0.45 | 50 | 0.1538 |
| 0.1533 | 0.54 | 60 | 0.1488 |
| 0.1495 | 0.63 | 70 | 0.1482 |
| 0.1488 | 0.73 | 80 | 0.1502 |
| 0.146 | 0.82 | 90 | 0.1498 |
| 0.1484 | 0.91 | 100 | 0.1495 |
| 0.15 | 1.0 | 110 | 0.1495 |
| 0.1436 | 1.09 | 120 | 0.1566 |
| 0.1355 | 1.18 | 130 | 0.1160 |
| 0.9465 | 1.27 | 140 | 7.4671 |
| 5.6519 | 1.36 | 150 | 3.3499 |
| 2.457 | 1.45 | 160 | 1.5871 |
| 1.842 | 1.54 | 170 | 0.8602 |
| 0.8488 | 1.63 | 180 | 0.5624 |
| 0.5347 | 1.72 | 190 | 0.4821 |
| 0.4016 | 1.81 | 200 | 0.3878 |
| 0.3025 | 1.9 | 210 | 0.2388 |
| 0.2251 | 1.99 | 220 | 0.2074 |
| 0.2096 | 2.08 | 230 | 0.2346 |
| 0.2117 | 2.18 | 240 | 0.1941 |
| 0.1817 | 2.27 | 250 | 0.1716 |
| 0.1629 | 2.36 | 260 | 0.1627 |
| 0.1533 | 2.45 | 270 | 0.1571 |
| 0.1503 | 2.54 | 280 | 0.1522 |
| 0.1453 | 2.63 | 290 | 0.1509 |
| 0.146 | 2.72 | 300 | 0.1492 |
| 0.1475 | 2.81 | 310 | 0.1459 |
| 0.1425 | 2.9 | 320 | 0.1465 |
| 0.1414 | 2.99 | 330 | 0.1456 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA10", "results": []}]} | Litzy619/O0428HMA10 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T01:44:13+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA10
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1456
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - yuffish/kettle-segmented
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
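A minimal sketch is shown below; it assumes the checkpoint loads with the standard `StableDiffusionPipeline` (as the repository tags indicate) and uses the instance prompt from training:

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("yuffish/kettle-segmented", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks object").images[0]
image.save("sks_object.png")
```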
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "stabilityai/stable-diffusion-2-1-base", "instance_prompt": "a photo of sks object"} | yuffish/kettle-segmented | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-30T01:44:38+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - yuffish/kettle-segmented
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# DreamBooth - yuffish/kettle-segmented\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - yuffish/kettle-segmented\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
83,
75,
6,
7,
23,
17
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n# DreamBooth - yuffish/kettle-segmented\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
Pretrained speculative draft model based on the llama3 tokenizer; trained on < 4B tokens. | {"language": ["en"], "license": "apache-2.0"} | maywell/l3-211m | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:47:22+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Pretrained speculative draft model based on the llama3 tokenizer; trained on < 4B tokens. | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
44
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NTTU-digital-TA-gemma
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "gemma", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b-it", "model-index": [{"name": "NTTU-digital-TA-gemma", "results": []}]} | NTTUNLPTEAM/NTTU-digital-TA-gemma | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-2b-it",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:47:52+00:00 | [] | [] | TAGS
#transformers #safetensors #gemma #text-generation #trl #sft #generated_from_trainer #conversational #base_model-google/gemma-2b-it #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# NTTU-digital-TA-gemma
This model is a fine-tuned version of google/gemma-2b-it on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# NTTU-digital-TA-gemma\n\nThis model is a fine-tuned version of google/gemma-2b-it on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.0+cu118\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #trl #sft #generated_from_trainer #conversational #base_model-google/gemma-2b-it #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# NTTU-digital-TA-gemma\n\nThis model is a fine-tuned version of google/gemma-2b-it on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.0+cu118\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
65,
32,
7,
9,
9,
4,
113,
5,
44
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #trl #sft #generated_from_trainer #conversational #base_model-google/gemma-2b-it #license-gemma #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# NTTU-digital-TA-gemma\n\nThis model is a fine-tuned version of google/gemma-2b-it on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.0+cu118\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation | transformers | # Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf)
* [EleutherAI/llemma_7b](https://huggingface.co/EleutherAI/llemma_7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: codellama/CodeLlama-7b-hf
parameters:
weight: 0.5
- model: EleutherAI/llemma_7b
parameters:
weight: 0.5
merge_method: linear
dtype: float16
```
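
As a rough sketch (not part of the original card), the merge described above could be reproduced programmatically, assuming the YAML is saved locally as `config.yml` and using mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) as exposed in recent releases; the output directory name is an arbitrary choice for illustration:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above (the file path is an assumption for this sketch).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the linear merge and write the merged weights to a local directory.
run_merge(
    merge_config,
    out_path="./merged_llemma_codellama",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```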
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["codellama/CodeLlama-7b-hf", "EleutherAI/llemma_7b"]} | JyoP/merged_llemma_codeLlama | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:EleutherAI/llemma_7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T01:50:15+00:00 | [
"2203.05482"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2203.05482 #base_model-codellama/CodeLlama-7b-hf #base_model-EleutherAI/llemma_7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Untitled Model (1)
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the linear merge method.
### Models Merged
The following models were included in the merge:
* codellama/CodeLlama-7b-hf
* EleutherAI/llemma_7b
### Configuration
The following YAML configuration was used to produce this model:
| [
"# Untitled Model (1)\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the linear merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* codellama/CodeLlama-7b-hf\n* EleutherAI/llemma_7b",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2203.05482 #base_model-codellama/CodeLlama-7b-hf #base_model-EleutherAI/llemma_7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Untitled Model (1)\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the linear merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* codellama/CodeLlama-7b-hf\n* EleutherAI/llemma_7b",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
84,
21,
4,
15,
40,
16
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2203.05482 #base_model-codellama/CodeLlama-7b-hf #base_model-EleutherAI/llemma_7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Untitled Model (1)\n\nThis is a merge of pre-trained language models created using mergekit.## Merge Details### Merge Method\n\nThis model was merged using the linear merge method.### Models Merged\n\nThe following models were included in the merge:\n* codellama/CodeLlama-7b-hf\n* EleutherAI/llemma_7b### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "t5-base"} | PQlet/T5base-lora-sumarizationTables-v2-MLM-lambda0.1 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:t5-base",
"region:us"
] | null | 2024-04-30T01:51:32+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-t5-base #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-t5-base #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
31,
6,
4,
50,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5,
13
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-t5-base #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.10.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2977
- F1 Score: 0.8871
- Accuracy: 0.8871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4231 | 2.13 | 200 | 0.4139 | 0.8159 | 0.8183 |
| 0.344 | 4.26 | 400 | 0.3559 | 0.8449 | 0.8450 |
| 0.321 | 6.38 | 600 | 0.3584 | 0.8530 | 0.8530 |
| 0.306 | 8.51 | 800 | 0.3437 | 0.8548 | 0.8550 |
| 0.292 | 10.64 | 1000 | 0.3478 | 0.8510 | 0.8510 |
| 0.2772 | 12.77 | 1200 | 0.3449 | 0.8597 | 0.8597 |
| 0.2726 | 14.89 | 1400 | 0.3547 | 0.8533 | 0.8537 |
| 0.2607 | 17.02 | 1600 | 0.3273 | 0.8704 | 0.8704 |
| 0.2592 | 19.15 | 1800 | 0.3434 | 0.8536 | 0.8537 |
| 0.2537 | 21.28 | 2000 | 0.3457 | 0.8615 | 0.8617 |
| 0.2524 | 23.4 | 2200 | 0.3281 | 0.8683 | 0.8684 |
| 0.241 | 25.53 | 2400 | 0.3780 | 0.8463 | 0.8464 |
| 0.2465 | 27.66 | 2600 | 0.3381 | 0.8608 | 0.8611 |
| 0.2397 | 29.79 | 2800 | 0.3359 | 0.8682 | 0.8684 |
| 0.2367 | 31.91 | 3000 | 0.3365 | 0.8696 | 0.8697 |
| 0.2323 | 34.04 | 3200 | 0.3274 | 0.8743 | 0.8744 |
| 0.2315 | 36.17 | 3400 | 0.3487 | 0.8635 | 0.8637 |
| 0.228 | 38.3 | 3600 | 0.3534 | 0.8635 | 0.8637 |
| 0.2271 | 40.43 | 3800 | 0.3564 | 0.8640 | 0.8644 |
| 0.2244 | 42.55 | 4000 | 0.3537 | 0.8608 | 0.8611 |
| 0.221 | 44.68 | 4200 | 0.3461 | 0.8676 | 0.8677 |
| 0.2205 | 46.81 | 4400 | 0.3504 | 0.8615 | 0.8617 |
| 0.2163 | 48.94 | 4600 | 0.3609 | 0.8586 | 0.8591 |
| 0.217 | 51.06 | 4800 | 0.3217 | 0.8784 | 0.8784 |
| 0.2146 | 53.19 | 5000 | 0.3550 | 0.8640 | 0.8644 |
| 0.2155 | 55.32 | 5200 | 0.3291 | 0.8730 | 0.8731 |
| 0.2103 | 57.45 | 5400 | 0.3674 | 0.8662 | 0.8664 |
| 0.2057 | 59.57 | 5600 | 0.3479 | 0.8744 | 0.8744 |
| 0.2108 | 61.7 | 5800 | 0.3268 | 0.8744 | 0.8744 |
| 0.2054 | 63.83 | 6000 | 0.3677 | 0.8674 | 0.8677 |
| 0.2057 | 65.96 | 6200 | 0.3632 | 0.8668 | 0.8671 |
| 0.2051 | 68.09 | 6400 | 0.3511 | 0.8722 | 0.8724 |
| 0.2032 | 70.21 | 6600 | 0.3648 | 0.8688 | 0.8691 |
| 0.2031 | 72.34 | 6800 | 0.3417 | 0.8730 | 0.8731 |
| 0.1995 | 74.47 | 7000 | 0.3788 | 0.8626 | 0.8631 |
| 0.195 | 76.6 | 7200 | 0.3478 | 0.8743 | 0.8744 |
| 0.2002 | 78.72 | 7400 | 0.3553 | 0.8723 | 0.8724 |
| 0.1986 | 80.85 | 7600 | 0.3591 | 0.8710 | 0.8711 |
| 0.1954 | 82.98 | 7800 | 0.3469 | 0.8757 | 0.8758 |
| 0.1976 | 85.11 | 8000 | 0.3576 | 0.8716 | 0.8717 |
| 0.1959 | 87.23 | 8200 | 0.3583 | 0.8723 | 0.8724 |
| 0.1972 | 89.36 | 8400 | 0.3552 | 0.8763 | 0.8764 |
| 0.1954 | 91.49 | 8600 | 0.3648 | 0.8702 | 0.8704 |
| 0.1937 | 93.62 | 8800 | 0.3511 | 0.8730 | 0.8731 |
| 0.1933 | 95.74 | 9000 | 0.3704 | 0.8662 | 0.8664 |
| 0.1914 | 97.87 | 9200 | 0.3564 | 0.8729 | 0.8731 |
| 0.195 | 100.0 | 9400 | 0.3591 | 0.8723 | 0.8724 |
| 0.1923 | 102.13 | 9600 | 0.3608 | 0.8723 | 0.8724 |
| 0.1919 | 104.26 | 9800 | 0.3586 | 0.8730 | 0.8731 |
| 0.1924 | 106.38 | 10000 | 0.3575 | 0.8736 | 0.8737 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:52:49+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3-seqsight\_16384\_512\_56M-L1\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2977
* F1 Score: 0.8871
* Accuracy: 0.8871
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3045
- F1 Score: 0.8824
- Accuracy: 0.8824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4046 | 2.13 | 200 | 0.3698 | 0.8461 | 0.8464 |
| 0.3108 | 4.26 | 400 | 0.3428 | 0.8563 | 0.8564 |
| 0.2721 | 6.38 | 600 | 0.3552 | 0.8549 | 0.8550 |
| 0.2588 | 8.51 | 800 | 0.3115 | 0.8724 | 0.8724 |
| 0.2456 | 10.64 | 1000 | 0.3570 | 0.8559 | 0.8564 |
| 0.2343 | 12.77 | 1200 | 0.3222 | 0.8771 | 0.8771 |
| 0.2271 | 14.89 | 1400 | 0.3434 | 0.8655 | 0.8657 |
| 0.2169 | 17.02 | 1600 | 0.3267 | 0.8831 | 0.8831 |
| 0.2137 | 19.15 | 1800 | 0.3258 | 0.8778 | 0.8778 |
| 0.2015 | 21.28 | 2000 | 0.3579 | 0.8688 | 0.8691 |
| 0.2021 | 23.4 | 2200 | 0.3488 | 0.8769 | 0.8771 |
| 0.1873 | 25.53 | 2400 | 0.3769 | 0.8715 | 0.8717 |
| 0.1908 | 27.66 | 2600 | 0.3619 | 0.8674 | 0.8677 |
| 0.1793 | 29.79 | 2800 | 0.3864 | 0.8706 | 0.8711 |
| 0.1767 | 31.91 | 3000 | 0.3573 | 0.8797 | 0.8798 |
| 0.171 | 34.04 | 3200 | 0.3449 | 0.8811 | 0.8811 |
| 0.1678 | 36.17 | 3400 | 0.4275 | 0.8617 | 0.8624 |
| 0.1595 | 38.3 | 3600 | 0.4030 | 0.8701 | 0.8704 |
| 0.1558 | 40.43 | 3800 | 0.4725 | 0.8547 | 0.8557 |
| 0.1512 | 42.55 | 4000 | 0.4683 | 0.8578 | 0.8584 |
| 0.1473 | 44.68 | 4200 | 0.4366 | 0.8620 | 0.8624 |
| 0.1421 | 46.81 | 4400 | 0.4197 | 0.8708 | 0.8711 |
| 0.1394 | 48.94 | 4600 | 0.4501 | 0.8598 | 0.8604 |
| 0.1374 | 51.06 | 4800 | 0.4113 | 0.8749 | 0.8751 |
| 0.1323 | 53.19 | 5000 | 0.4698 | 0.8654 | 0.8657 |
| 0.1287 | 55.32 | 5200 | 0.4620 | 0.8648 | 0.8651 |
| 0.1272 | 57.45 | 5400 | 0.5108 | 0.8611 | 0.8617 |
| 0.119 | 59.57 | 5600 | 0.5212 | 0.8606 | 0.8611 |
| 0.1202 | 61.7 | 5800 | 0.4716 | 0.8694 | 0.8697 |
| 0.1156 | 63.83 | 6000 | 0.5120 | 0.8605 | 0.8611 |
| 0.1118 | 65.96 | 6200 | 0.5179 | 0.8619 | 0.8624 |
| 0.1127 | 68.09 | 6400 | 0.5186 | 0.8571 | 0.8577 |
| 0.1044 | 70.21 | 6600 | 0.6003 | 0.8523 | 0.8530 |
| 0.1059 | 72.34 | 6800 | 0.5264 | 0.8626 | 0.8631 |
| 0.1045 | 74.47 | 7000 | 0.5904 | 0.8529 | 0.8537 |
| 0.0996 | 76.6 | 7200 | 0.5376 | 0.8660 | 0.8664 |
| 0.0991 | 78.72 | 7400 | 0.5570 | 0.8646 | 0.8651 |
| 0.0966 | 80.85 | 7600 | 0.5589 | 0.8646 | 0.8651 |
| 0.0975 | 82.98 | 7800 | 0.5842 | 0.8619 | 0.8624 |
| 0.0927 | 85.11 | 8000 | 0.6082 | 0.8584 | 0.8591 |
| 0.0912 | 87.23 | 8200 | 0.6212 | 0.8598 | 0.8604 |
| 0.0952 | 89.36 | 8400 | 0.6192 | 0.8543 | 0.8550 |
| 0.09 | 91.49 | 8600 | 0.6004 | 0.8598 | 0.8604 |
| 0.0891 | 93.62 | 8800 | 0.6050 | 0.8626 | 0.8631 |
| 0.0882 | 95.74 | 9000 | 0.6315 | 0.8584 | 0.8591 |
| 0.0857 | 97.87 | 9200 | 0.6263 | 0.8578 | 0.8584 |
| 0.0872 | 100.0 | 9400 | 0.6448 | 0.8550 | 0.8557 |
| 0.0849 | 102.13 | 9600 | 0.6521 | 0.8543 | 0.8550 |
| 0.0834 | 104.26 | 9800 | 0.6395 | 0.8577 | 0.8584 |
| 0.0853 | 106.38 | 10000 | 0.6370 | 0.8570 | 0.8577 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:53:25+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3-seqsight\_16384\_512\_56M-L8\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3045
* F1 Score: 0.8824
* Accuracy: 0.8824
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2646
- F1 Score: 0.8951
- Accuracy: 0.8951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3885 | 2.13 | 200 | 0.3562 | 0.8477 | 0.8477 |
| 0.2813 | 4.26 | 400 | 0.3305 | 0.8675 | 0.8677 |
| 0.2493 | 6.38 | 600 | 0.3649 | 0.8522 | 0.8524 |
| 0.2349 | 8.51 | 800 | 0.3031 | 0.8838 | 0.8838 |
| 0.2193 | 10.64 | 1000 | 0.3812 | 0.8577 | 0.8584 |
| 0.2032 | 12.77 | 1200 | 0.3416 | 0.8764 | 0.8764 |
| 0.1925 | 14.89 | 1400 | 0.3750 | 0.8708 | 0.8711 |
| 0.1779 | 17.02 | 1600 | 0.3903 | 0.8597 | 0.8597 |
| 0.1674 | 19.15 | 1800 | 0.3564 | 0.8724 | 0.8724 |
| 0.1489 | 21.28 | 2000 | 0.4619 | 0.8612 | 0.8617 |
| 0.1423 | 23.4 | 2200 | 0.4485 | 0.8735 | 0.8737 |
| 0.1215 | 25.53 | 2400 | 0.4759 | 0.8784 | 0.8784 |
| 0.1185 | 27.66 | 2600 | 0.5499 | 0.8436 | 0.8444 |
| 0.0993 | 29.79 | 2800 | 0.5338 | 0.8520 | 0.8524 |
| 0.0962 | 31.91 | 3000 | 0.5457 | 0.8514 | 0.8517 |
| 0.0823 | 34.04 | 3200 | 0.5406 | 0.8577 | 0.8577 |
| 0.0787 | 36.17 | 3400 | 0.6370 | 0.8559 | 0.8564 |
| 0.0708 | 38.3 | 3600 | 0.6247 | 0.8574 | 0.8577 |
| 0.0674 | 40.43 | 3800 | 0.6834 | 0.8478 | 0.8484 |
| 0.057 | 42.55 | 4000 | 0.8145 | 0.8462 | 0.8470 |
| 0.0536 | 44.68 | 4200 | 0.7901 | 0.8400 | 0.8410 |
| 0.0505 | 46.81 | 4400 | 0.7505 | 0.8659 | 0.8664 |
| 0.0463 | 48.94 | 4600 | 0.7752 | 0.8490 | 0.8497 |
| 0.0449 | 51.06 | 4800 | 0.7215 | 0.8601 | 0.8604 |
| 0.0384 | 53.19 | 5000 | 0.8821 | 0.8376 | 0.8383 |
| 0.0351 | 55.32 | 5200 | 0.9139 | 0.8465 | 0.8470 |
| 0.0349 | 57.45 | 5400 | 0.9360 | 0.8387 | 0.8397 |
| 0.0361 | 59.57 | 5600 | 0.8710 | 0.8575 | 0.8577 |
| 0.0308 | 61.7 | 5800 | 0.8229 | 0.8597 | 0.8597 |
| 0.0294 | 63.83 | 6000 | 0.9199 | 0.8517 | 0.8524 |
| 0.0293 | 65.96 | 6200 | 0.8718 | 0.8588 | 0.8591 |
| 0.0271 | 68.09 | 6400 | 0.8787 | 0.8617 | 0.8617 |
| 0.0238 | 70.21 | 6600 | 0.9513 | 0.8581 | 0.8584 |
| 0.0241 | 72.34 | 6800 | 0.9352 | 0.8629 | 0.8631 |
| 0.0225 | 74.47 | 7000 | 0.9943 | 0.8548 | 0.8550 |
| 0.0231 | 76.6 | 7200 | 0.9241 | 0.8602 | 0.8604 |
| 0.0204 | 78.72 | 7400 | 1.0017 | 0.8622 | 0.8624 |
| 0.0206 | 80.85 | 7600 | 1.0763 | 0.8498 | 0.8504 |
| 0.0182 | 82.98 | 7800 | 1.0418 | 0.8575 | 0.8577 |
| 0.0166 | 85.11 | 8000 | 1.0393 | 0.8567 | 0.8570 |
| 0.0172 | 87.23 | 8200 | 1.0861 | 0.8492 | 0.8497 |
| 0.0167 | 89.36 | 8400 | 1.1617 | 0.8470 | 0.8477 |
| 0.015 | 91.49 | 8600 | 1.0801 | 0.8621 | 0.8624 |
| 0.0151 | 93.62 | 8800 | 1.1022 | 0.8541 | 0.8544 |
| 0.014 | 95.74 | 9000 | 1.1847 | 0.8438 | 0.8444 |
| 0.0125 | 97.87 | 9200 | 1.1438 | 0.8534 | 0.8537 |
| 0.0131 | 100.0 | 9400 | 1.1487 | 0.8554 | 0.8557 |
| 0.0121 | 102.13 | 9600 | 1.1538 | 0.8533 | 0.8537 |
| 0.0124 | 104.26 | 9800 | 1.1753 | 0.8513 | 0.8517 |
| 0.0121 | 106.38 | 10000 | 1.1525 | 0.8501 | 0.8504 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:53:32+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3-seqsight\_16384\_512\_56M-L32\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2646
* F1 Score: 0.8951
* Accuracy: 0.8951
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, send us a message at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization (a minimal loading sketch follows this list)
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
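
The long-context training code itself is not included in this card. As a minimal sketch of the RoPE theta adjustment only (not the training recipe), the base model's RoPE theta and context window can be overridden at load time with the Hugging Face `transformers` Llama implementation; the 2.80B theta and 1048k length below are taken from the final stage of the progressive-training table:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Override RoPE theta and the context window before loading the weights.
config = AutoConfig.from_pretrained(base_id)
config.rope_theta = 2.80e9                 # final-stage value from the table below
config.max_position_embeddings = 1048576  # ~1048k-token context

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```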
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai)'s high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
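
The exact augmentation pipeline is not published in this card; the snippet below is only a rough sketch of one way to pack SlimPajama documents into long training sequences. The dataset and tokenizer IDs, the `text` field name, and the 65K target length are assumptions chosen for illustration:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
stream = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

TARGET_LEN = 65_536  # tokens per packed example (65K stage)
buffer, packed_examples = [], []
for doc in stream:
    # Concatenate document tokens, then cut fixed-length training examples.
    buffer.extend(tokenizer(doc["text"], add_special_tokens=False)["input_ids"])
    while len(buffer) >= TARGET_LEN:
        packed_examples.append(buffer[:TARGET_LEN])
        buffer = buffer[TARGET_LEN:]
    if len(packed_examples) >= 4:  # small cap so the sketch terminates quickly
        break
```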
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes – 8B and 70B parameters – in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-30T01:53:50+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| [<img src="URL width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, send us a message at contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization (a short sketch of this scaling rule follows the list below)
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
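To make the NTK-aware initialization above concrete, here is a minimal sketch of the common scaling rule for the RoPE base ("theta"). The function name and the starting values (theta = 500,000, head dim = 128, 8k original context) are illustrative assumptions; as noted above, the theta actually used for training was refined by empirical optimization rather than taken directly from this closed form.

```python
def ntk_aware_rope_theta(base_theta: float, head_dim: int,
                         orig_ctx: int, target_ctx: int) -> float:
    """NTK-aware scaling of the RoPE base.

    Instead of linearly squeezing positions into the original window,
    the rotary base is enlarged so high-frequency dimensions stay nearly
    unchanged while low-frequency dimensions stretch to cover the longer
    context.
    """
    scale = target_ctx / orig_ctx
    # Common NTK-aware rule: theta' = theta * scale ** (d / (d - 2))
    return base_theta * scale ** (head_dim / (head_dim - 2))

# Illustrative numbers only (Llama-3 8B: head_dim=128, theta=500k at 8k context).
print(f"{ntk_aware_rope_theta(500_000.0, 128, 8_192, 1_048_576):.3e}")
```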
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy's high-performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data:
For training data, we generate long contexts by augmenting SlimPajama.
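The augmentation recipe itself is not included in the card. As a rough illustration only, one common way to build long-context samples is to pack many SlimPajama documents into a single sequence until a token budget is hit; the dataset id, tokenizer repo, and the 262k budget below are assumptions for the sketch, not the authors' actual pipeline.

```python
from itertools import islice

from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed repos for illustration; swap in whatever tokenizer/corpus you use.
tokenizer = AutoTokenizer.from_pretrained("gradientai/Llama-3-8B-Instruct-Gradient-1048k")
stream = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

def pack_long_context(docs, target_tokens=262_144):
    """Concatenate documents (separated by EOS) until the token budget is reached."""
    buffer, n_tokens = [], 0
    for doc in docs:
        ids = tokenizer(doc["text"], add_special_tokens=False)["input_ids"]
        buffer.extend(ids + [tokenizer.eos_token_id])
        n_tokens += len(ids) + 1
        if n_tokens >= target_tokens:
            yield buffer[:target_tokens]
            buffer, n_tokens = [], 0

sample = next(pack_long_context(islice(stream, 50_000)))
print(len(sample))  # one packed long-context training sample
```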
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
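A minimal Transformers pipeline example for this checkpoint might look like the following; the repo id (gradientai/Llama-3-8B-Instruct-Gradient-1048k) and the sampling settings are assumptions rather than values taken from the card.

```python
import torch
import transformers

model_id = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"  # assumed repo id

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Llama 3 ends assistant turns with <|eot_id|> in addition to the EOS token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```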
#### Transformers AutoModelForCausalLM
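Likewise, a sketch using the Auto classes directly (same assumed repo id; treat it as an illustration, not the card's verbatim snippet):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```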
### Use with 'llama3'
Please, follow the instructions in the repository
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
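One plausible form of that command (the repo id and the `original/*` checkpoint layout are assumptions here):

```bash
huggingface-cli download gradientai/Llama-3-8B-Instruct-Gradient-1048k \
  --include "original/*" \
  --local-dir Llama-3-8B-Instruct-Gradient-1048k
```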
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
56,
42,
6,
13,
429,
8,
6,
270,
280,
72,
115,
118,
126,
2136
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.#### Transformers pipeline#### Transformers AutoModelForCausalLM### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.### Base pretrained models### Instruction tuned models### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. 
We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. 
On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5400
- F1 Score: 0.7389
- Accuracy: 0.7387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
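
As a rough guide, the configuration above maps onto `transformers.TrainingArguments` as sketched below. This is a minimal reconstruction, not the exact training script: the output directory name is illustrative, and the 200-step evaluation interval is read off the results table below rather than stated explicitly.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="GUE_EMP_H4ac-seqsight_16384_512_56M-L1_f",
    learning_rate=5e-4,              # learning_rate: 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,                # training_steps: 10000
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,                  # evaluation every 200 steps, as in the results table
    logging_steps=200,
)
```

In a PEFT setup these arguments would be passed to a `Trainer` together with the adapter-wrapped base model and the GUE_EMP_H4ac train/validation splits.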
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6113 | 0.93 | 200 | 0.5680 | 0.7095 | 0.7100 |
| 0.572 | 1.87 | 400 | 0.5594 | 0.7164 | 0.7164 |
| 0.5552 | 2.8 | 600 | 0.5536 | 0.7272 | 0.7273 |
| 0.5502 | 3.74 | 800 | 0.5479 | 0.7287 | 0.7284 |
| 0.5447 | 4.67 | 1000 | 0.5498 | 0.7288 | 0.7287 |
| 0.5353 | 5.61 | 1200 | 0.5618 | 0.7185 | 0.7205 |
| 0.5363 | 6.54 | 1400 | 0.5655 | 0.7144 | 0.7170 |
| 0.5237 | 7.48 | 1600 | 0.5516 | 0.7353 | 0.7355 |
| 0.533 | 8.41 | 1800 | 0.5478 | 0.7296 | 0.7299 |
| 0.5298 | 9.35 | 2000 | 0.5565 | 0.7226 | 0.7238 |
| 0.5184 | 10.28 | 2200 | 0.5374 | 0.7390 | 0.7387 |
| 0.5243 | 11.21 | 2400 | 0.5541 | 0.7308 | 0.7317 |
| 0.5154 | 12.15 | 2600 | 0.5691 | 0.7251 | 0.7270 |
| 0.5176 | 13.08 | 2800 | 0.5562 | 0.7323 | 0.7331 |
| 0.519 | 14.02 | 3000 | 0.5338 | 0.7395 | 0.7393 |
| 0.5141 | 14.95 | 3200 | 0.5441 | 0.7395 | 0.7396 |
| 0.511 | 15.89 | 3400 | 0.5451 | 0.7396 | 0.7399 |
| 0.5109 | 16.82 | 3600 | 0.5474 | 0.7370 | 0.7375 |
| 0.5124 | 17.76 | 3800 | 0.5658 | 0.7261 | 0.7282 |
| 0.51 | 18.69 | 4000 | 0.5441 | 0.7386 | 0.7387 |
| 0.5065 | 19.63 | 4200 | 0.5371 | 0.7436 | 0.7437 |
| 0.5079 | 20.56 | 4400 | 0.5356 | 0.7442 | 0.7443 |
| 0.5038 | 21.5 | 4600 | 0.5512 | 0.7350 | 0.7361 |
| 0.5053 | 22.43 | 4800 | 0.5326 | 0.7442 | 0.7440 |
| 0.5014 | 23.36 | 5000 | 0.5475 | 0.7416 | 0.7422 |
| 0.5036 | 24.3 | 5200 | 0.5289 | 0.7474 | 0.7472 |
| 0.503 | 25.23 | 5400 | 0.5268 | 0.7440 | 0.7437 |
| 0.503 | 26.17 | 5600 | 0.5320 | 0.7409 | 0.7408 |
| 0.5008 | 27.1 | 5800 | 0.5317 | 0.7413 | 0.7411 |
| 0.4931 | 28.04 | 6000 | 0.5367 | 0.7431 | 0.7428 |
| 0.501 | 28.97 | 6200 | 0.5425 | 0.7423 | 0.7425 |
| 0.4986 | 29.91 | 6400 | 0.5394 | 0.7416 | 0.7416 |
| 0.4991 | 30.84 | 6600 | 0.5435 | 0.7396 | 0.7402 |
| 0.4947 | 31.78 | 6800 | 0.5304 | 0.7430 | 0.7428 |
| 0.4952 | 32.71 | 7000 | 0.5355 | 0.7411 | 0.7411 |
| 0.492 | 33.64 | 7200 | 0.5465 | 0.7395 | 0.7402 |
| 0.4942 | 34.58 | 7400 | 0.5327 | 0.7427 | 0.7425 |
| 0.4941 | 35.51 | 7600 | 0.5377 | 0.7401 | 0.7402 |
| 0.4893 | 36.45 | 7800 | 0.5352 | 0.7436 | 0.7434 |
| 0.4958 | 37.38 | 8000 | 0.5437 | 0.7408 | 0.7413 |
| 0.4902 | 38.32 | 8200 | 0.5360 | 0.7425 | 0.7425 |
| 0.4922 | 39.25 | 8400 | 0.5329 | 0.7429 | 0.7428 |
| 0.4945 | 40.19 | 8600 | 0.5353 | 0.7409 | 0.7408 |
| 0.4909 | 41.12 | 8800 | 0.5414 | 0.7419 | 0.7422 |
| 0.4882 | 42.06 | 9000 | 0.5362 | 0.7408 | 0.7408 |
| 0.4898 | 42.99 | 9200 | 0.5449 | 0.7430 | 0.7434 |
| 0.4889 | 43.93 | 9400 | 0.5376 | 0.7427 | 0.7428 |
| 0.4879 | 44.86 | 9600 | 0.5355 | 0.7416 | 0.7416 |
| 0.4867 | 45.79 | 9800 | 0.5374 | 0.7424 | 0.7425 |
| 0.4924 | 46.73 | 10000 | 0.5380 | 0.7433 | 0.7434 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T01:59:11+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H4ac-seqsight\_16384\_512\_56M-L1\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5400
* F1 Score: 0.7389
* Accuracy: 0.7387
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
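
No official snippet is provided, so the following is a minimal sketch of loading this PEFT adapter for inference. The fully qualified base model id (`mistralai/Mistral-7B-Instruct-v0.2`) is inferred from the `Mistral-7B-Instruct-v0.2` base-model tag and should be verified, and the prompt is a placeholder.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"   # assumed base model repo
adapter_id = "NandGate1110/mistral-7b-bakery"    # this repository

# Load the base model, then attach the adapter weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Suggest a weekly pastry menu for a small bakery.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```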
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
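
For reference, the quantization settings listed above correspond to a `transformers.BitsAndBytesConfig` along the lines of the sketch below; the object is reconstructed from the list rather than taken from the original training script.

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and fp32 compute,
# matching the config values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```

Such a config is typically passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` before attaching the PEFT adapter, in a QLoRA-style fine-tuning setup.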
### Framework versions
- PEFT 0.6.0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0
| {"library_name": "peft", "base_model": "Mistral-7B-Instruct-v0.2"} | NandGate1110/mistral-7b-bakery | null | [
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-30T01:59:16+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #mistral #arxiv-1910.09700 #base_model-Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32",
"### Framework versions\n\n\n- PEFT 0.6.0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32",
"### Framework versions\n\n\n- PEFT 0.6.0"
] | [
"TAGS\n#peft #safetensors #mistral #arxiv-1910.09700 #base_model-Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32",
"### Framework versions\n\n\n- PEFT 0.6.0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32",
"### Framework versions\n\n\n- PEFT 0.6.0"
] | [
43,
6,
4,
50,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5,
154,
13,
154,
13
] | [
"TAGS\n#peft #safetensors #mistral #arxiv-1910.09700 #base_model-Mistral-7B-Instruct-v0.2 #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32### Framework versions\n\n\n- PEFT 0.6.0## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32### Framework versions\n\n\n- PEFT 0.6.0"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5424
- F1 Score: 0.7384
- Accuracy: 0.7381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
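
Expressed in plain PyTorch terms, the optimizer and schedule above amount to something like the sketch below. The `Trainer` normally constructs these internally; the zero warmup steps are an assumption (no warmup is listed), and the stand-in parameter is a placeholder for the trainable adapter weights of the PEFT-wrapped model.

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Stand-in for the trainable adapter parameters; the frozen base model
# contributes no parameters to the optimizer under PEFT.
params = [torch.nn.Parameter(torch.zeros(8, 8))]

optimizer = torch.optim.Adam(params, lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000
)
```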
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.598 | 0.93 | 200 | 0.5625 | 0.7171 | 0.7179 |
| 0.5513 | 1.87 | 400 | 0.5641 | 0.7209 | 0.7229 |
| 0.536 | 2.8 | 600 | 0.5415 | 0.7383 | 0.7381 |
| 0.5305 | 3.74 | 800 | 0.5354 | 0.7355 | 0.7352 |
| 0.526 | 4.67 | 1000 | 0.5325 | 0.7405 | 0.7402 |
| 0.514 | 5.61 | 1200 | 0.5421 | 0.7363 | 0.7370 |
| 0.5144 | 6.54 | 1400 | 0.5380 | 0.7371 | 0.7375 |
| 0.4999 | 7.48 | 1600 | 0.5358 | 0.7453 | 0.7452 |
| 0.5078 | 8.41 | 1800 | 0.5257 | 0.7483 | 0.7481 |
| 0.5022 | 9.35 | 2000 | 0.5268 | 0.7487 | 0.7484 |
| 0.4926 | 10.28 | 2200 | 0.5264 | 0.7454 | 0.7452 |
| 0.4939 | 11.21 | 2400 | 0.5519 | 0.7339 | 0.7355 |
| 0.4868 | 12.15 | 2600 | 0.5432 | 0.7401 | 0.7408 |
| 0.4841 | 13.08 | 2800 | 0.5397 | 0.7461 | 0.7460 |
| 0.4847 | 14.02 | 3000 | 0.5271 | 0.7430 | 0.7431 |
| 0.4782 | 14.95 | 3200 | 0.5273 | 0.7484 | 0.7481 |
| 0.4763 | 15.89 | 3400 | 0.5244 | 0.7534 | 0.7531 |
| 0.4726 | 16.82 | 3600 | 0.5343 | 0.7436 | 0.7437 |
| 0.474 | 17.76 | 3800 | 0.5673 | 0.7270 | 0.7296 |
| 0.4703 | 18.69 | 4000 | 0.5288 | 0.7443 | 0.7440 |
| 0.4653 | 19.63 | 4200 | 0.5236 | 0.7454 | 0.7452 |
| 0.4639 | 20.56 | 4400 | 0.5356 | 0.7444 | 0.7443 |
| 0.4622 | 21.5 | 4600 | 0.5348 | 0.7427 | 0.7431 |
| 0.4596 | 22.43 | 4800 | 0.5321 | 0.7449 | 0.7446 |
| 0.4561 | 23.36 | 5000 | 0.5373 | 0.7439 | 0.7437 |
| 0.458 | 24.3 | 5200 | 0.5286 | 0.7464 | 0.7463 |
| 0.454 | 25.23 | 5400 | 0.5276 | 0.7507 | 0.7504 |
| 0.4527 | 26.17 | 5600 | 0.5275 | 0.7454 | 0.7452 |
| 0.4511 | 27.1 | 5800 | 0.5334 | 0.7457 | 0.7455 |
| 0.4405 | 28.04 | 6000 | 0.5433 | 0.7466 | 0.7463 |
| 0.4505 | 28.97 | 6200 | 0.5300 | 0.7490 | 0.7487 |
| 0.4461 | 29.91 | 6400 | 0.5396 | 0.7477 | 0.7475 |
| 0.4465 | 30.84 | 6600 | 0.5380 | 0.7435 | 0.7437 |
| 0.4421 | 31.78 | 6800 | 0.5272 | 0.7466 | 0.7463 |
| 0.4398 | 32.71 | 7000 | 0.5429 | 0.7438 | 0.7437 |
| 0.4378 | 33.64 | 7200 | 0.5481 | 0.7425 | 0.7428 |
| 0.4374 | 34.58 | 7400 | 0.5395 | 0.7477 | 0.7475 |
| 0.433 | 35.51 | 7600 | 0.5425 | 0.7427 | 0.7425 |
| 0.4309 | 36.45 | 7800 | 0.5489 | 0.7467 | 0.7466 |
| 0.4355 | 37.38 | 8000 | 0.5436 | 0.7482 | 0.7481 |
| 0.4284 | 38.32 | 8200 | 0.5459 | 0.7502 | 0.7501 |
| 0.4317 | 39.25 | 8400 | 0.5448 | 0.7428 | 0.7425 |
| 0.4327 | 40.19 | 8600 | 0.5481 | 0.7469 | 0.7466 |
| 0.4287 | 41.12 | 8800 | 0.5515 | 0.7480 | 0.7481 |
| 0.4256 | 42.06 | 9000 | 0.5487 | 0.7515 | 0.7513 |
| 0.427 | 42.99 | 9200 | 0.5510 | 0.7469 | 0.7469 |
| 0.425 | 43.93 | 9400 | 0.5452 | 0.7495 | 0.7493 |
| 0.4242 | 44.86 | 9600 | 0.5466 | 0.7498 | 0.7496 |
| 0.4253 | 45.79 | 9800 | 0.5469 | 0.7500 | 0.7499 |
| 0.4268 | 46.73 | 10000 | 0.5457 | 0.7500 | 0.7499 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:00:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H4ac-seqsight\_16384\_512\_56M-L8\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5424
* F1 Score: 0.7384
* Accuracy: 0.7381
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomistral-7b-dpo-full-sft-wo-healthsearch_qa
This model is a fine-tuned version of [Minbyul/biomistral-7b-wo-healthsearch_qa-sft](https://huggingface.co/Minbyul/biomistral-7b-wo-healthsearch_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6929
- Rewards/chosen: 0.0003
- Rewards/rejected: -0.0003
- Rewards/accuracies: 0.5394
- Rewards/margins: 0.0007
- Logps/rejected: -1184.0101
- Logps/chosen: -767.6729
- Logits/rejected: -3.1682
- Logits/chosen: -3.2170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
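
Mapped onto code, these settings correspond roughly to the `transformers.TrainingArguments` sketched below; in an alignment-handbook style run this object is passed to TRL's `DPOTrainer` together with the SFT policy and a frozen reference model. The output directory is illustrative, and DPO-specific options such as the beta coefficient are not listed in this card. The reported total train batch size of 64 is the product of 8 (per device) × 2 (gradient accumulation) × 4 (GPUs).

```python
from transformers import TrainingArguments

# Sketch of the DPO run's training arguments; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="biomistral-7b-dpo-full-sft-wo-healthsearch_qa",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```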
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/biomistral-7b-wo-healthsearch_qa-sft", "model-index": [{"name": "biomistral-7b-dpo-full-sft-wo-healthsearch_qa", "results": []}]} | Minbyul/biomistral-7b-dpo-full-sft-wo-healthsearch_qa | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/biomistral-7b-wo-healthsearch_qa-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:01:03+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/biomistral-7b-wo-healthsearch_qa-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# biomistral-7b-dpo-full-sft-wo-healthsearch_qa
This model is a fine-tuned version of Minbyul/biomistral-7b-wo-healthsearch_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6929
- Rewards/chosen: 0.0003
- Rewards/rejected: -0.0003
- Rewards/accuracies: 0.5394
- Rewards/margins: 0.0007
- Logps/rejected: -1184.0101
- Logps/chosen: -767.6729
- Logits/rejected: -3.1682
- Logits/chosen: -3.2170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# biomistral-7b-dpo-full-sft-wo-healthsearch_qa\n\nThis model is a fine-tuned version of Minbyul/biomistral-7b-wo-healthsearch_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6929\n- Rewards/chosen: 0.0003\n- Rewards/rejected: -0.0003\n- Rewards/accuracies: 0.5394\n- Rewards/margins: 0.0007\n- Logps/rejected: -1184.0101\n- Logps/chosen: -767.6729\n- Logits/rejected: -3.1682\n- Logits/chosen: -3.2170",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/biomistral-7b-wo-healthsearch_qa-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# biomistral-7b-dpo-full-sft-wo-healthsearch_qa\n\nThis model is a fine-tuned version of Minbyul/biomistral-7b-wo-healthsearch_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6929\n- Rewards/chosen: 0.0003\n- Rewards/rejected: -0.0003\n- Rewards/accuracies: 0.5394\n- Rewards/margins: 0.0007\n- Logps/rejected: -1184.0101\n- Logps/chosen: -767.6729\n- Logits/rejected: -3.1682\n- Logits/chosen: -3.2170",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
105,
176,
7,
9,
9,
4,
155,
5,
43
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/biomistral-7b-wo-healthsearch_qa-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# biomistral-7b-dpo-full-sft-wo-healthsearch_qa\n\nThis model is a fine-tuned version of Minbyul/biomistral-7b-wo-healthsearch_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6929\n- Rewards/chosen: 0.0003\n- Rewards/rejected: -0.0003\n- Rewards/accuracies: 0.5394\n- Rewards/margins: 0.0007\n- Logps/rejected: -1184.0101\n- Logps/chosen: -767.6729\n- Logits/rejected: -3.1682\n- Logits/chosen: -3.2170## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tduch/gemma-7b-it-adapters-alex-street | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:01:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5443
- F1 Score: 0.7415
- Accuracy: 0.7413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
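
For reference only, the hyperparameters listed above map onto a `transformers` `TrainingArguments` configuration roughly like the sketch below. This is an illustration, not the original training script: the output directory and the evaluation cadence (every 200 steps, read off the results table) are assumptions, and the model/dataset objects are placeholders.

```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# Output dir, eval cadence and the PEFT-wrapped model/datasets are placeholders.
from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="GUE_EMP_H4ac-seqsight_16384_512_56M-L32_f",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch",          # Adam with betas=(0.9, 0.999), epsilon=1e-08 (defaults)
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",  # assumption, inferred from the 200-step eval rows below
    eval_steps=200,
    logging_steps=200,
)

# trainer = Trainer(model=peft_model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```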
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5865 | 0.93 | 200 | 0.5616 | 0.7236 | 0.7246 |
| 0.5409 | 1.87 | 400 | 0.5523 | 0.7232 | 0.7246 |
| 0.5243 | 2.8 | 600 | 0.5323 | 0.7402 | 0.7399 |
| 0.5133 | 3.74 | 800 | 0.5351 | 0.7370 | 0.7372 |
| 0.5091 | 4.67 | 1000 | 0.5197 | 0.7474 | 0.7472 |
| 0.4952 | 5.61 | 1200 | 0.5306 | 0.7454 | 0.7455 |
| 0.4908 | 6.54 | 1400 | 0.5291 | 0.7436 | 0.7437 |
| 0.4748 | 7.48 | 1600 | 0.5288 | 0.7397 | 0.7396 |
| 0.4777 | 8.41 | 1800 | 0.5187 | 0.7454 | 0.7452 |
| 0.4683 | 9.35 | 2000 | 0.5285 | 0.7318 | 0.7328 |
| 0.4579 | 10.28 | 2200 | 0.5254 | 0.7501 | 0.7499 |
| 0.4525 | 11.21 | 2400 | 0.5367 | 0.7453 | 0.7452 |
| 0.4419 | 12.15 | 2600 | 0.5284 | 0.7412 | 0.7416 |
| 0.4354 | 13.08 | 2800 | 0.5425 | 0.7490 | 0.7487 |
| 0.4326 | 14.02 | 3000 | 0.5501 | 0.7409 | 0.7413 |
| 0.425 | 14.95 | 3200 | 0.5560 | 0.7504 | 0.7501 |
| 0.4155 | 15.89 | 3400 | 0.5385 | 0.7507 | 0.7504 |
| 0.4054 | 16.82 | 3600 | 0.5621 | 0.7375 | 0.7372 |
| 0.4034 | 17.76 | 3800 | 0.6042 | 0.7287 | 0.7314 |
| 0.3951 | 18.69 | 4000 | 0.5603 | 0.7334 | 0.7334 |
| 0.3892 | 19.63 | 4200 | 0.5567 | 0.7455 | 0.7452 |
| 0.38 | 20.56 | 4400 | 0.5779 | 0.7408 | 0.7405 |
| 0.376 | 21.5 | 4600 | 0.5861 | 0.7414 | 0.7413 |
| 0.3681 | 22.43 | 4800 | 0.5816 | 0.7367 | 0.7364 |
| 0.3586 | 23.36 | 5000 | 0.6062 | 0.7376 | 0.7378 |
| 0.3575 | 24.3 | 5200 | 0.5973 | 0.7431 | 0.7428 |
| 0.3537 | 25.23 | 5400 | 0.5922 | 0.7384 | 0.7381 |
| 0.3443 | 26.17 | 5600 | 0.5948 | 0.7375 | 0.7372 |
| 0.341 | 27.1 | 5800 | 0.6103 | 0.7323 | 0.7323 |
| 0.3265 | 28.04 | 6000 | 0.6109 | 0.7393 | 0.7390 |
| 0.3317 | 28.97 | 6200 | 0.6055 | 0.7329 | 0.7326 |
| 0.3274 | 29.91 | 6400 | 0.6146 | 0.7270 | 0.7267 |
| 0.3222 | 30.84 | 6600 | 0.6171 | 0.7323 | 0.7320 |
| 0.3159 | 31.78 | 6800 | 0.5983 | 0.7299 | 0.7296 |
| 0.3057 | 32.71 | 7000 | 0.6538 | 0.7258 | 0.7255 |
| 0.3081 | 33.64 | 7200 | 0.6444 | 0.7245 | 0.7243 |
| 0.3031 | 34.58 | 7400 | 0.6478 | 0.7320 | 0.7317 |
| 0.299 | 35.51 | 7600 | 0.6399 | 0.7263 | 0.7261 |
| 0.2883 | 36.45 | 7800 | 0.6671 | 0.7349 | 0.7346 |
| 0.2941 | 37.38 | 8000 | 0.6549 | 0.7273 | 0.7270 |
| 0.2869 | 38.32 | 8200 | 0.6615 | 0.7320 | 0.7317 |
| 0.2848 | 39.25 | 8400 | 0.6594 | 0.7293 | 0.7290 |
| 0.2852 | 40.19 | 8600 | 0.6697 | 0.7323 | 0.7320 |
| 0.2811 | 41.12 | 8800 | 0.6715 | 0.7291 | 0.7287 |
| 0.2754 | 42.06 | 9000 | 0.6837 | 0.7296 | 0.7293 |
| 0.278 | 42.99 | 9200 | 0.6753 | 0.7314 | 0.7311 |
| 0.2715 | 43.93 | 9400 | 0.6735 | 0.7257 | 0.7255 |
| 0.2657 | 44.86 | 9600 | 0.6834 | 0.7284 | 0.7282 |
| 0.2685 | 45.79 | 9800 | 0.6874 | 0.7296 | 0.7293 |
| 0.2717 | 46.73 | 10000 | 0.6834 | 0.7284 | 0.7282 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:02:06+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H4ac-seqsight\_16384\_512\_56M-L32\_f
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5443
* F1 Score: 0.7415
* Accuracy: 0.7413
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA11
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
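
As a rough illustration (not the original script): the effective train batch size of 128 is 8 samples per device times 16 gradient-accumulation steps, and the schedule/precision choices correspond to a `TrainingArguments` sketch like this, with the output directory as a placeholder:

```python
# Sketch only: gradient accumulation (8 * 16 = 128 effective batch),
# cosine-with-restarts schedule with 100 warmup steps, and native AMP (fp16).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="O0428HMA11",            # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,     # 8 * 16 = 128 total train batch size
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                          # "Native AMP" mixed precision
)
```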
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6441 | 0.09 | 10 | 0.2795 |
| 0.1859 | 0.18 | 20 | 0.1565 |
| 0.1509 | 0.27 | 30 | 0.1657 |
| 0.1581 | 0.36 | 40 | 0.1509 |
| 0.1497 | 0.45 | 50 | 0.1504 |
| 0.1514 | 0.54 | 60 | 0.1496 |
| 0.1497 | 0.63 | 70 | 0.1472 |
| 0.1487 | 0.73 | 80 | 0.1529 |
| 0.1465 | 0.82 | 90 | 0.1488 |
| 0.149 | 0.91 | 100 | 0.1478 |
| 0.1511 | 1.0 | 110 | 0.1483 |
| 0.1438 | 1.09 | 120 | 0.1352 |
| 0.1382 | 1.18 | 130 | 0.1203 |
| 0.59 | 1.27 | 140 | 2.9085 |
| 0.602 | 1.36 | 150 | 1.5195 |
| 6.4792 | 1.45 | 160 | 5.2383 |
| 2.3451 | 1.54 | 170 | 0.7049 |
| 1.0846 | 1.63 | 180 | 0.6462 |
| 0.5224 | 1.72 | 190 | 0.3806 |
| 0.3875 | 1.81 | 200 | 0.2835 |
| 0.2533 | 1.9 | 210 | 0.2670 |
| 0.2265 | 1.99 | 220 | 0.2117 |
| 0.1544 | 2.08 | 230 | 0.1180 |
| 0.1085 | 2.18 | 240 | 0.0898 |
| 0.0812 | 2.27 | 250 | 0.0735 |
| 0.0721 | 2.36 | 260 | 0.0757 |
| 0.0719 | 2.45 | 270 | 0.0617 |
| 0.0545 | 2.54 | 280 | 0.0565 |
| 0.0479 | 2.63 | 290 | 0.0479 |
| 0.0458 | 2.72 | 300 | 0.0403 |
| 0.0316 | 2.81 | 310 | 0.0371 |
| 0.0298 | 2.9 | 320 | 0.0362 |
| 0.0346 | 2.99 | 330 | 0.0353 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA11", "results": []}]} | Litzy619/O0428HMA11 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T02:02:44+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA11
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0353
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
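
The card itself provides no snippet. Given the repository id and the `llama` / `text-generation` / `conversational` tags, a standard Transformers loading pattern would presumably look like the sketch below; the prompt and generation settings are arbitrary, and the presence of a chat template is an assumption.

```python
# Sketch only: generic causal-LM loading for this repository.
# Assumes the checkpoint ships a tokenizer and a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/w2vxdwf"  # repository id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]  # arbitrary example prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```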
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/w2vxdwf | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:03:29+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
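
No official snippet is given. Since the repository metadata lists `mistralai/Mistral-7B-Instruct-v0.2` as the base model and the library is PEFT, the adapters can presumably be attached along these lines (a sketch, not a verified recipe):

```python
# Sketch only: load the base model, then attach this repository's PEFT adapters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"                          # from repo metadata
adapter_id = "Charishma27/sft_mistral_709_steps_3_apple_sampled_epoch"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```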
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | Charishma27/sft_mistral_709_steps_3_apple_sampled_epoch | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-04-30T02:05:03+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
44,
6,
4,
50,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5,
13
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.10.0"
] |
null | transformers |
# Uploaded model
- **Developed by:** dmorrigan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
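
For inference, the LoRA adapters in this repository can presumably be loaded with Unsloth in the usual way. This is a sketch: `max_seq_length` is an assumption, and plain `transformers` + `peft` against the 4-bit base model should also work.

```python
# Sketch only: load this LoRA repository with Unsloth for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dmorrigan/HebrewLyricsLoRA-40K-5Epoch",  # this repository
    max_seq_length=2048,   # assumption; use whatever the base model supports
    dtype=None,            # auto-detect
    load_in_4bit=True,     # matches the unsloth/llama-3-8b-bnb-4bit base
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```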
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | dmorrigan/HebrewLyricsLoRA-40K-5Epoch | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:06:01+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: dmorrigan
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: dmorrigan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: dmorrigan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
64,
80
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: dmorrigan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/nlfv3uy | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:06:39+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# financial-sentiment-model-1000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7525
- Accuracy: 0.7
- F1: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
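The card does not include a usage snippet; a minimal, hedged example of running the fine-tuned checkpoint for sentiment classification is sketched below. The repository id is taken from this card's metadata and the input sentence is illustrative.
```python
from transformers import pipeline

# Repo id taken from this card's metadata; the example sentence is illustrative.
classifier = pipeline(
    "text-classification",
    model="kevinwlip/financial-sentiment-model-1000-samples",
)

print(classifier("Quarterly revenue beat expectations and guidance was raised."))
```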
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "financial-sentiment-model-1000-samples", "results": []}]} | kevinwlip/financial-sentiment-model-1000-samples | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:07:45+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# financial-sentiment-model-1000-samples
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7525
- Accuracy: 0.7
- F1: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# financial-sentiment-model-1000-samples\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7525\n- Accuracy: 0.7\n- F1: 0.7",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# financial-sentiment-model-1000-samples\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7525\n- Accuracy: 0.7\n- F1: 0.7",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
59,
63,
7,
9,
9,
4,
93,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# financial-sentiment-model-1000-samples\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7525\n- Accuracy: 0.7\n- F1: 0.7## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
summarization | transformers |
# indobart-small
This model is a fine-tuned version of [bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [Liputan6](https://paperswithcode.com/dataset/liputan6) dataset.
See a demo of the model in this [notebook](https://colab.research.google.com/drive/1bcqS42M3e5IySPYtAa-S4UeyJczg9DXh?usp=sharing).
## Training procedure
### Training hyperparameters
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
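As a hedged sketch (not the authors' actual training script), the hyperparameters above map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows; preparation of the tokenized Liputan6 dataset is assumed and not shown.
```python
from transformers import Seq2SeqTrainingArguments

# Hedged sketch mapping the listed hyperparameters onto Seq2SeqTrainingArguments;
# Adam(betas=(0.9, 0.999), eps=1e-8) with a linear schedule is the Trainer default.
training_args = Seq2SeqTrainingArguments(
    output_dir="indobart-base",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=1,
    seed=42,
    lr_scheduler_type="linear",
    predict_with_generate=True,  # so ROUGE can be computed from generated summaries
)
```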
### Training results
| Training Loss | Epoch | R1 Precision | R1 Recall | R1 Fmeasure | R2 Precision | R2 Recall | R2 Fmeasure | Rl Precision | Rl Recall | Rl Fmeasure |
|:-------------:|:-----:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|
| 0.3064 | 1.0 | 0.3487 | 0.6043 | 0.4375 | 0.1318 | 0.2613 | 0.1723 | 0.3349 | 0.5833 | 0.4208 |
## Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Load model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gaduhhartawan/indobart-base")
tokenizer = AutoTokenizer.from_pretrained("gaduhhartawan/indobart-base")
# Input article for summarization
ARTICLE_TO_SUMMARIZE = "lorem ipsum..."
# Generate summary
input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt')
summary_ids = model.generate(input_ids,
min_length=30,
max_length=150,
num_beams=2,
repetition_penalty=2.0,
length_penalty=0.8,
early_stopping=True,
no_repeat_ngram_size=2,
use_cache=True,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95)
# Decode the summary
summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print("Summary: ", summary_text)
``` | {"language": ["id"], "license": "mit", "tags": ["bart"], "datasets": ["id_liputan6"], "metrics": ["rouge"], "pipeline_tag": "summarization"} | gaduhhartawan/indobart-base | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"id",
"dataset:id_liputan6",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:08:33+00:00 | [] | [
"id"
] | TAGS
#transformers #safetensors #bart #text2text-generation #summarization #id #dataset-id_liputan6 #license-mit #autotrain_compatible #endpoints_compatible #region-us
| indobart-small
==============
This model is a fine-tuned version of bart-large-cnn on Liputan6 dataset.
See demo model here notebook.
Training procedure
------------------
### Training hyperparameters
* learning\_rate: 0.0001
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
Framework versions
------------------
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
Usage
-----
| [
"### Training hyperparameters\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results\n\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1\n\n\nUsage\n-----"
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #summarization #id #dataset-id_liputan6 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results\n\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1\n\n\nUsage\n-----"
] | [
49,
90,
70
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #summarization #id #dataset-id_liputan6 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1### Training results\n\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1\n\n\nUsage\n-----"
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
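As a rough, hedged illustration of the RoPE theta adjustment described above (not a reproduction of Gradient's exact schedule or empirical optimization), the snippet below loads the base Llama-3 8B Instruct checkpoint with an enlarged `rope_theta` and context window via its config; the values are copied from the 262k stage of the progressive-training table further down.
```python
# Hedged sketch of the RoPE theta adjustment idea; the values below come from the
# 262k stage of the progressive-training table and are illustrative only.
from transformers import AutoConfig, AutoModelForCausalLM

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"

config = AutoConfig.from_pretrained(base_id)
config.rope_theta = 207.1e6               # enlarged RoPE base frequency (262k stage)
config.max_position_embeddings = 262144   # extended context window

model = AutoModelForCausalLM.from_pretrained(base_id, config=config)
```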
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes – 8B and 70B parameters – in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4.2-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:08:48+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| [<img src="URL width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data:
For training data, we generate long contexts by augmenting SlimPajama.
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes – 8B and 70B parameters – in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
#### Transformers AutoModelForCausalLM
### Use with 'llama3'
Please, follow the instructions in the repository
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
52,
42,
6,
13,
429,
8,
6,
270,
280,
72,
115,
118,
126,
2136
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.#### Transformers pipeline#### Transformers AutoModelForCausalLM### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.### Base pretrained models### Instruction tuned models### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. 
We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. 
On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4296
- F1 Score: 0.8218
- Accuracy: 0.8225
## Model description
More information needed
## Intended uses & limitations
More information needed
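The card itself leaves this section unfilled. As a rough, hedged sketch (not part of the original card), a PEFT adapter of this kind is typically loaded on top of its base model as shown below; the sequence-classification head, `num_labels=2`, and the tokenizer choice are assumptions about the GUE task, not something stated here.

```python
# Minimal sketch for loading this adapter with peft.
# The classification head and num_labels=2 are assumptions, not taken from this card;
# trust_remote_code=True may be needed depending on how the base model is implemented.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter
model.eval()
```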
## Training and evaluation data
More information needed
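The training data is not described here beyond the dataset link above. As an illustrative sketch (the split and column names are assumptions), the referenced dataset can be pulled from the Hub for inspection with the `datasets` library:

```python
# Sketch for inspecting the fine-tuning dataset; the "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("mahdibaghbanzadeh/GUE_EMP_H3K79me3")
print(ds)              # lists the available splits and features
print(ds["train"][0])  # peek at a single example
```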
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
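For orientation only, the hyperparameters listed above map roughly onto the following `TrainingArguments`; this is a reconstruction, not the original training script, and `output_dir` plus the evaluation cadence are assumptions (the results table reports metrics every 200 steps).

```python
# Rough reconstruction of the listed hyperparameters as Hugging Face TrainingArguments.
# output_dir and the eval cadence are assumptions; the card's train_batch_size is mapped
# to per_device_train_batch_size since the device count is not stated.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,  # matches the reporting interval in the results table
)
```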
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4925 | 1.1 | 200 | 0.4580 | 0.8028 | 0.8027 |
| 0.4557 | 2.21 | 400 | 0.4510 | 0.8060 | 0.8072 |
| 0.4499 | 3.31 | 600 | 0.4464 | 0.8042 | 0.8058 |
| 0.4395 | 4.42 | 800 | 0.4424 | 0.8066 | 0.8079 |
| 0.4391 | 5.52 | 1000 | 0.4490 | 0.8007 | 0.8027 |
| 0.4291 | 6.63 | 1200 | 0.4541 | 0.7968 | 0.7992 |
| 0.4313 | 7.73 | 1400 | 0.4366 | 0.8060 | 0.8072 |
| 0.4228 | 8.84 | 1600 | 0.4589 | 0.7947 | 0.7979 |
| 0.4228 | 9.94 | 1800 | 0.4297 | 0.8143 | 0.8145 |
| 0.4193 | 11.05 | 2000 | 0.4448 | 0.8044 | 0.8058 |
| 0.4188 | 12.15 | 2200 | 0.4314 | 0.8130 | 0.8135 |
| 0.4139 | 13.26 | 2400 | 0.4306 | 0.8092 | 0.8100 |
| 0.415 | 14.36 | 2600 | 0.4272 | 0.8132 | 0.8138 |
| 0.4126 | 15.47 | 2800 | 0.4396 | 0.8075 | 0.8089 |
| 0.4105 | 16.57 | 3000 | 0.4327 | 0.8148 | 0.8148 |
| 0.4098 | 17.68 | 3200 | 0.4307 | 0.8124 | 0.8131 |
| 0.405 | 18.78 | 3400 | 0.4389 | 0.8098 | 0.8110 |
| 0.4054 | 19.89 | 3600 | 0.4358 | 0.8099 | 0.8110 |
| 0.4054 | 20.99 | 3800 | 0.4408 | 0.8114 | 0.8124 |
| 0.4032 | 22.1 | 4000 | 0.4319 | 0.8084 | 0.8096 |
| 0.4011 | 23.2 | 4200 | 0.4315 | 0.8134 | 0.8141 |
| 0.4006 | 24.31 | 4400 | 0.4423 | 0.8098 | 0.8114 |
| 0.3961 | 25.41 | 4600 | 0.4382 | 0.8149 | 0.8159 |
| 0.4012 | 26.52 | 4800 | 0.4318 | 0.8161 | 0.8169 |
| 0.4009 | 27.62 | 5000 | 0.4319 | 0.8166 | 0.8176 |
| 0.3955 | 28.73 | 5200 | 0.4295 | 0.8145 | 0.8155 |
| 0.3934 | 29.83 | 5400 | 0.4325 | 0.8141 | 0.8148 |
| 0.3945 | 30.94 | 5600 | 0.4320 | 0.8162 | 0.8169 |
| 0.3929 | 32.04 | 5800 | 0.4342 | 0.8157 | 0.8162 |
| 0.3925 | 33.15 | 6000 | 0.4293 | 0.8156 | 0.8166 |
| 0.3931 | 34.25 | 6200 | 0.4330 | 0.8134 | 0.8141 |
| 0.3883 | 35.36 | 6400 | 0.4372 | 0.8167 | 0.8176 |
| 0.3917 | 36.46 | 6600 | 0.4272 | 0.8188 | 0.8193 |
| 0.3895 | 37.57 | 6800 | 0.4318 | 0.8156 | 0.8166 |
| 0.3889 | 38.67 | 7000 | 0.4313 | 0.8174 | 0.8183 |
| 0.385 | 39.78 | 7200 | 0.4342 | 0.8164 | 0.8173 |
| 0.3904 | 40.88 | 7400 | 0.4298 | 0.8154 | 0.8159 |
| 0.3863 | 41.99 | 7600 | 0.4323 | 0.8161 | 0.8169 |
| 0.3862 | 43.09 | 7800 | 0.4362 | 0.8164 | 0.8173 |
| 0.3872 | 44.2 | 8000 | 0.4349 | 0.8151 | 0.8162 |
| 0.3857 | 45.3 | 8200 | 0.4290 | 0.8170 | 0.8176 |
| 0.382 | 46.41 | 8400 | 0.4305 | 0.8174 | 0.8180 |
| 0.3883 | 47.51 | 8600 | 0.4331 | 0.8169 | 0.8180 |
| 0.3808 | 48.62 | 8800 | 0.4348 | 0.8162 | 0.8173 |
| 0.3836 | 49.72 | 9000 | 0.4346 | 0.8162 | 0.8173 |
| 0.385 | 50.83 | 9200 | 0.4380 | 0.8141 | 0.8155 |
| 0.3831 | 51.93 | 9400 | 0.4341 | 0.8155 | 0.8166 |
| 0.3824 | 53.04 | 9600 | 0.4324 | 0.8171 | 0.8180 |
| 0.3803 | 54.14 | 9800 | 0.4326 | 0.8161 | 0.8169 |
| 0.382 | 55.25 | 10000 | 0.4344 | 0.8159 | 0.8169 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:10:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_16384\_512\_56M-L1\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4296
* F1 Score: 0.8218
* Accuracy: 0.8225
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4254
- F1 Score: 0.8273
- Accuracy: 0.8277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4821 | 1.1 | 200 | 0.4463 | 0.8110 | 0.8110 |
| 0.4456 | 2.21 | 400 | 0.4363 | 0.8128 | 0.8135 |
| 0.4377 | 3.31 | 600 | 0.4392 | 0.8029 | 0.8048 |
| 0.4231 | 4.42 | 800 | 0.4418 | 0.8026 | 0.8041 |
| 0.4221 | 5.52 | 1000 | 0.4398 | 0.8081 | 0.8100 |
| 0.4099 | 6.63 | 1200 | 0.4558 | 0.8065 | 0.8089 |
| 0.4116 | 7.73 | 1400 | 0.4356 | 0.8135 | 0.8152 |
| 0.4011 | 8.84 | 1600 | 0.4595 | 0.8074 | 0.8103 |
| 0.3996 | 9.94 | 1800 | 0.4245 | 0.8146 | 0.8152 |
| 0.3953 | 11.05 | 2000 | 0.4438 | 0.8073 | 0.8079 |
| 0.3926 | 12.15 | 2200 | 0.4207 | 0.8227 | 0.8232 |
| 0.3855 | 13.26 | 2400 | 0.4189 | 0.8243 | 0.8249 |
| 0.3876 | 14.36 | 2600 | 0.4192 | 0.8281 | 0.8284 |
| 0.3807 | 15.47 | 2800 | 0.4265 | 0.8216 | 0.8225 |
| 0.3775 | 16.57 | 3000 | 0.4232 | 0.8248 | 0.8249 |
| 0.3745 | 17.68 | 3200 | 0.4212 | 0.8239 | 0.8245 |
| 0.3687 | 18.78 | 3400 | 0.4597 | 0.8051 | 0.8083 |
| 0.3681 | 19.89 | 3600 | 0.4259 | 0.8195 | 0.8207 |
| 0.364 | 20.99 | 3800 | 0.4339 | 0.8158 | 0.8173 |
| 0.3606 | 22.1 | 4000 | 0.4220 | 0.8201 | 0.8204 |
| 0.3589 | 23.2 | 4200 | 0.4268 | 0.8186 | 0.8193 |
| 0.3531 | 24.31 | 4400 | 0.4384 | 0.8144 | 0.8162 |
| 0.3495 | 25.41 | 4600 | 0.4317 | 0.8262 | 0.8263 |
| 0.3546 | 26.52 | 4800 | 0.4296 | 0.8186 | 0.8193 |
| 0.3484 | 27.62 | 5000 | 0.4367 | 0.8198 | 0.8214 |
| 0.3459 | 28.73 | 5200 | 0.4349 | 0.8184 | 0.8197 |
| 0.3405 | 29.83 | 5400 | 0.4344 | 0.8154 | 0.8162 |
| 0.3405 | 30.94 | 5600 | 0.4304 | 0.8230 | 0.8239 |
| 0.3381 | 32.04 | 5800 | 0.4300 | 0.8195 | 0.8197 |
| 0.3366 | 33.15 | 6000 | 0.4373 | 0.8240 | 0.8252 |
| 0.335 | 34.25 | 6200 | 0.4381 | 0.8191 | 0.8193 |
| 0.3281 | 35.36 | 6400 | 0.4550 | 0.8225 | 0.8235 |
| 0.3323 | 36.46 | 6600 | 0.4338 | 0.8224 | 0.8232 |
| 0.3295 | 37.57 | 6800 | 0.4406 | 0.8192 | 0.8204 |
| 0.3261 | 38.67 | 7000 | 0.4415 | 0.8204 | 0.8214 |
| 0.3243 | 39.78 | 7200 | 0.4425 | 0.8224 | 0.8235 |
| 0.3262 | 40.88 | 7400 | 0.4315 | 0.8198 | 0.8200 |
| 0.3232 | 41.99 | 7600 | 0.4392 | 0.8171 | 0.8183 |
| 0.3241 | 43.09 | 7800 | 0.4418 | 0.8228 | 0.8235 |
| 0.3202 | 44.2 | 8000 | 0.4426 | 0.8187 | 0.8197 |
| 0.3201 | 45.3 | 8200 | 0.4383 | 0.8210 | 0.8214 |
| 0.3166 | 46.41 | 8400 | 0.4383 | 0.8208 | 0.8214 |
| 0.3186 | 47.51 | 8600 | 0.4454 | 0.8218 | 0.8228 |
| 0.3102 | 48.62 | 8800 | 0.4445 | 0.8212 | 0.8221 |
| 0.3143 | 49.72 | 9000 | 0.4470 | 0.8209 | 0.8218 |
| 0.3164 | 50.83 | 9200 | 0.4476 | 0.8190 | 0.8204 |
| 0.3113 | 51.93 | 9400 | 0.4463 | 0.8208 | 0.8218 |
| 0.3099 | 53.04 | 9600 | 0.4432 | 0.8211 | 0.8218 |
| 0.3081 | 54.14 | 9800 | 0.4443 | 0.8208 | 0.8214 |
| 0.3096 | 55.25 | 10000 | 0.4462 | 0.8220 | 0.8228 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:12:00+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_16384\_512\_56M-L8\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4254
* F1 Score: 0.8273
* Accuracy: 0.8277
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4267
- F1 Score: 0.8228
- Accuracy: 0.8232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4764 | 1.1 | 200 | 0.4380 | 0.8112 | 0.8114 |
| 0.4381 | 2.21 | 400 | 0.4305 | 0.8106 | 0.8117 |
| 0.4268 | 3.31 | 600 | 0.4299 | 0.8055 | 0.8065 |
| 0.4129 | 4.42 | 800 | 0.4419 | 0.8070 | 0.8093 |
| 0.4092 | 5.52 | 1000 | 0.4268 | 0.8149 | 0.8166 |
| 0.3941 | 6.63 | 1200 | 0.4522 | 0.8068 | 0.8096 |
| 0.3919 | 7.73 | 1400 | 0.4270 | 0.8182 | 0.8197 |
| 0.3788 | 8.84 | 1600 | 0.4612 | 0.8045 | 0.8079 |
| 0.3739 | 9.94 | 1800 | 0.4191 | 0.8281 | 0.8287 |
| 0.3658 | 11.05 | 2000 | 0.4359 | 0.8158 | 0.8159 |
| 0.3602 | 12.15 | 2200 | 0.4162 | 0.8307 | 0.8311 |
| 0.3471 | 13.26 | 2400 | 0.4247 | 0.8229 | 0.8239 |
| 0.3454 | 14.36 | 2600 | 0.4207 | 0.8289 | 0.8291 |
| 0.3342 | 15.47 | 2800 | 0.4371 | 0.8172 | 0.8180 |
| 0.3245 | 16.57 | 3000 | 0.4329 | 0.8222 | 0.8221 |
| 0.3179 | 17.68 | 3200 | 0.4430 | 0.8146 | 0.8152 |
| 0.3075 | 18.78 | 3400 | 0.4965 | 0.7971 | 0.8003 |
| 0.3012 | 19.89 | 3600 | 0.4450 | 0.8216 | 0.8225 |
| 0.2906 | 20.99 | 3800 | 0.4661 | 0.8151 | 0.8162 |
| 0.2801 | 22.1 | 4000 | 0.4618 | 0.8218 | 0.8218 |
| 0.2748 | 23.2 | 4200 | 0.4734 | 0.8115 | 0.8124 |
| 0.2642 | 24.31 | 4400 | 0.5041 | 0.8032 | 0.8044 |
| 0.2551 | 25.41 | 4600 | 0.5074 | 0.8081 | 0.8089 |
| 0.2536 | 26.52 | 4800 | 0.5061 | 0.7931 | 0.7947 |
| 0.2485 | 27.62 | 5000 | 0.5218 | 0.8000 | 0.8020 |
| 0.2397 | 28.73 | 5200 | 0.4901 | 0.8071 | 0.8083 |
| 0.2293 | 29.83 | 5400 | 0.5268 | 0.7981 | 0.7992 |
| 0.2272 | 30.94 | 5600 | 0.5205 | 0.8129 | 0.8131 |
| 0.218 | 32.04 | 5800 | 0.5089 | 0.8119 | 0.8121 |
| 0.2167 | 33.15 | 6000 | 0.5431 | 0.8035 | 0.8044 |
| 0.2099 | 34.25 | 6200 | 0.5419 | 0.8113 | 0.8114 |
| 0.2042 | 35.36 | 6400 | 0.5599 | 0.8094 | 0.8100 |
| 0.2014 | 36.46 | 6600 | 0.5510 | 0.8078 | 0.8086 |
| 0.1992 | 37.57 | 6800 | 0.5469 | 0.8102 | 0.8107 |
| 0.1888 | 38.67 | 7000 | 0.5835 | 0.8086 | 0.8096 |
| 0.188 | 39.78 | 7200 | 0.5681 | 0.8132 | 0.8141 |
| 0.1853 | 40.88 | 7400 | 0.5798 | 0.8029 | 0.8037 |
| 0.1798 | 41.99 | 7600 | 0.5693 | 0.8074 | 0.8086 |
| 0.1779 | 43.09 | 7800 | 0.5952 | 0.8127 | 0.8135 |
| 0.1745 | 44.2 | 8000 | 0.5988 | 0.8070 | 0.8076 |
| 0.171 | 45.3 | 8200 | 0.5874 | 0.8056 | 0.8062 |
| 0.1648 | 46.41 | 8400 | 0.6126 | 0.8043 | 0.8055 |
| 0.1695 | 47.51 | 8600 | 0.6173 | 0.8072 | 0.8083 |
| 0.1622 | 48.62 | 8800 | 0.6059 | 0.8049 | 0.8055 |
| 0.1594 | 49.72 | 9000 | 0.6308 | 0.8064 | 0.8076 |
| 0.1633 | 50.83 | 9200 | 0.6171 | 0.8004 | 0.8017 |
| 0.1542 | 51.93 | 9400 | 0.6232 | 0.8114 | 0.8121 |
| 0.1529 | 53.04 | 9600 | 0.6267 | 0.8081 | 0.8089 |
| 0.1544 | 54.14 | 9800 | 0.6244 | 0.8083 | 0.8089 |
| 0.1524 | 55.25 | 10000 | 0.6277 | 0.8082 | 0.8089 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:12:33+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_16384\_512\_56M-L32\_f
===================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4267
* F1 Score: 0.8228
* Accuracy: 0.8232
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5118
- F1 Score: 0.7666
- Accuracy: 0.7674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5989 | 1.01 | 200 | 0.5886 | 0.7054 | 0.7102 |
| 0.5718 | 2.02 | 400 | 0.5668 | 0.7254 | 0.7273 |
| 0.5573 | 3.03 | 600 | 0.5544 | 0.7293 | 0.7317 |
| 0.5494 | 4.04 | 800 | 0.5505 | 0.7390 | 0.7412 |
| 0.5421 | 5.05 | 1000 | 0.5408 | 0.7439 | 0.7453 |
| 0.5367 | 6.06 | 1200 | 0.5384 | 0.7451 | 0.7475 |
| 0.5328 | 7.07 | 1400 | 0.5390 | 0.7483 | 0.7506 |
| 0.5322 | 8.08 | 1600 | 0.5394 | 0.7446 | 0.7475 |
| 0.5283 | 9.09 | 1800 | 0.5305 | 0.7548 | 0.7566 |
| 0.525 | 10.1 | 2000 | 0.5294 | 0.7526 | 0.7541 |
| 0.5226 | 11.11 | 2200 | 0.5340 | 0.7504 | 0.7522 |
| 0.5216 | 12.12 | 2400 | 0.5258 | 0.7542 | 0.7554 |
| 0.5188 | 13.13 | 2600 | 0.5317 | 0.7531 | 0.7551 |
| 0.5189 | 14.14 | 2800 | 0.5259 | 0.7528 | 0.7547 |
| 0.5161 | 15.15 | 3000 | 0.5287 | 0.7537 | 0.7557 |
| 0.5174 | 16.16 | 3200 | 0.5241 | 0.7537 | 0.7560 |
| 0.5135 | 17.17 | 3400 | 0.5300 | 0.7546 | 0.7563 |
| 0.5155 | 18.18 | 3600 | 0.5182 | 0.7628 | 0.7639 |
| 0.5124 | 19.19 | 3800 | 0.5212 | 0.7585 | 0.7601 |
| 0.5101 | 20.2 | 4000 | 0.5210 | 0.7597 | 0.7610 |
| 0.5075 | 21.21 | 4200 | 0.5264 | 0.7525 | 0.7551 |
| 0.5097 | 22.22 | 4400 | 0.5239 | 0.7587 | 0.7604 |
| 0.5046 | 23.23 | 4600 | 0.5246 | 0.7530 | 0.7554 |
| 0.5118 | 24.24 | 4800 | 0.5209 | 0.7508 | 0.7538 |
| 0.5044 | 25.25 | 5000 | 0.5164 | 0.7600 | 0.7610 |
| 0.5067 | 26.26 | 5200 | 0.5184 | 0.7642 | 0.7648 |
| 0.5034 | 27.27 | 5400 | 0.5183 | 0.7579 | 0.7598 |
| 0.5061 | 28.28 | 5600 | 0.5151 | 0.7618 | 0.7626 |
| 0.505 | 29.29 | 5800 | 0.5236 | 0.7526 | 0.7560 |
| 0.4997 | 30.3 | 6000 | 0.5172 | 0.7578 | 0.7598 |
| 0.5028 | 31.31 | 6200 | 0.5198 | 0.7574 | 0.7592 |
| 0.5023 | 32.32 | 6400 | 0.5236 | 0.7536 | 0.7566 |
| 0.4991 | 33.33 | 6600 | 0.5221 | 0.7544 | 0.7569 |
| 0.4986 | 34.34 | 6800 | 0.5186 | 0.7566 | 0.7588 |
| 0.4967 | 35.35 | 7000 | 0.5191 | 0.7574 | 0.7592 |
| 0.5004 | 36.36 | 7200 | 0.5165 | 0.7574 | 0.7595 |
| 0.5001 | 37.37 | 7400 | 0.5180 | 0.7551 | 0.7576 |
| 0.499 | 38.38 | 7600 | 0.5176 | 0.7611 | 0.7623 |
| 0.4986 | 39.39 | 7800 | 0.5171 | 0.7564 | 0.7582 |
| 0.4977 | 40.4 | 8000 | 0.5209 | 0.7565 | 0.7585 |
| 0.4964 | 41.41 | 8200 | 0.5190 | 0.7546 | 0.7573 |
| 0.5 | 42.42 | 8400 | 0.5204 | 0.7543 | 0.7573 |
| 0.4965 | 43.43 | 8600 | 0.5198 | 0.7548 | 0.7573 |
| 0.4928 | 44.44 | 8800 | 0.5181 | 0.7585 | 0.7604 |
| 0.4953 | 45.45 | 9000 | 0.5175 | 0.7570 | 0.7588 |
| 0.4932 | 46.46 | 9200 | 0.5196 | 0.7571 | 0.7592 |
| 0.4999 | 47.47 | 9400 | 0.5202 | 0.7530 | 0.7560 |
| 0.4888 | 48.48 | 9600 | 0.5202 | 0.7543 | 0.7566 |
| 0.5001 | 49.49 | 9800 | 0.5192 | 0.7541 | 0.7566 |
| 0.4915 | 50.51 | 10000 | 0.5186 | 0.7550 | 0.7573 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:13:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_16384\_512\_56M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5118
* F1 Score: 0.7666
* Accuracy: 0.7674
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Andrew Chahnwoo Park
- **Model type:** LLaMA
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
### Model Sources
- **Repository:** [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
- **GitHub:** [TinyLlama](https://github.com/jzhang38/TinyLlama)
## Training Details
### Training Data
[DataBricks Instruction-Tuning Dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) (5% utilized)
### Training Procedure
1. Tokenize and label data
2. Load LLM
3. Apply Quantized Low-Rank Adaptation (QLoRA) to modules ["q_proj","k_proj","v_proj","o_proj"]
4. Perform training with HuggingFace Trainer
5. Use DataCollatorForSeq2Seq
 - Note that this data collator was chosen over DataCollatorForLanguageModeling, as the latter overwrites pre-defined "labels"
- This overwriting is done by the tf_mask_tokens and torch_mask_tokens functions for [DataCollatorForLanguageModeling](https://github.com/huggingface/transformers/blob/main/src/transformers/data/data_collator.py#L634)
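A minimal sketch of steps 2–5 above. The 4-bit quantization settings, LoRA rank/alpha, and the toy dataset are assumptions made for illustration; the target modules, the collator choice, and the -100 label padding follow the list above.

```python
# Hypothetical sketch of the QLoRA + Trainer setup described above.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)

# Step 2: load the LLM (here in 4-bit; the exact quantization settings are illustrative).
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                           bnb_4bit_compute_dtype=torch.float16),
)
model = prepare_model_for_kbit_training(model)

# Step 3: apply LoRA to the attention projections listed in the card.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
))

# Stand-in for the tokenized and labelled dataset described in the next sections.
toy_dataset = Dataset.from_list([{
    "input_ids": [1, 2, 3], "attention_mask": [1, 1, 1], "labels": [-100, -100, 3],
}])

# Steps 4-5: train with the Trainer and DataCollatorForSeq2Seq, which pads the
# pre-defined "labels" (with -100) instead of overwriting them.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1),  # placeholder values
    train_dataset=toy_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=-100),
)
trainer.train()
```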
#### Preprocessing
Utilized different instruction prompt templates for each category in the dataset.
##### open_qa
### Instruction:
Answer the question below. Be as specific and concise as possible.
### Question:
{instruction}
### Response:
{response}
##### general_qa
### Instruction:
	Answer the question below to the best of your knowledge.
### Question:
{instruction}
### Response:
{response}
##### classification
### Instruction:
You will be given a question and a list of potential answers to that question. You are to select the correct answers out of the available choices.
### Question:
{instruction}
### Response:
{response}
##### closed_qa
### Instruction:
You will be given a question to answer and context that contains pertinent information. Provide a concise and accurate response to the question using the information provided in the context.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
##### brainstorming
### Instruction:
You will be given a question that does not have a correct answer. You are to brainstorm one possible answer to the provided question.
### Question:
{instruction}
### Response:
{response}
##### information_extraction
### Instruction:
You will be given a question or query and some context that can be used to answer it. You are to extract relevant information from the provided context to provide an accurate response to the given query.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
##### summarization
### Instruction:
You will be given a question or request and context that can be used for your response. You are to summarize the provided context to provide an answer to the question.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
##### creative_writing
### Instruction:
You will be given a prompt that you are to write about. Be creative.
### Prompt:
{instruction}
### Response:
	{response}
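A sketch of how these templates could be applied to the dolly-15k records (the helper and dictionary names are illustrative, and only two of the eight categories are spelled out; the wording comes from the templates above):

```python
# Hypothetical helper that renders a databricks-dolly-15k record with the
# category-specific templates shown above (only two categories included for brevity).
TEMPLATES = {
    "open_qa": (
        "### Instruction:\nAnswer the question below. Be as specific and concise as possible.\n\n"
        "### Question:\n{instruction}\n\n### Response:\n{response}"
    ),
    "closed_qa": (
        "### Instruction:\nYou will be given a question to answer and context that contains "
        "pertinent information. Provide a concise and accurate response to the question using "
        "the information provided in the context.\n\n"
        "### Question:\n{instruction}\n\n### Context:\n{context}\n\n### Response:\n{response}"
    ),
}

def format_example(record: dict) -> str:
    # record is one dolly-15k row with "category", "instruction", "context", "response".
    template = TEMPLATES[record["category"]]
    return template.format(
        instruction=record["instruction"],
        context=record.get("context", ""),
        response=record["response"],
    )
```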
#### Labelled Data Format
{
'input_ids' : List[int],
'attention_mask' : List[int],
'labels' : List[int]
}
Where labels were created by masking everything but the "response" with the mask token (-100)
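A sketch of how such labels could be produced (the helper name and truncation handling are assumptions; the -100 masking of everything before the response follows the description above):

```python
# Hypothetical labelling step: mask everything except the response with -100.
def tokenize_and_label(prompt: str, response: str, tokenizer, max_length: int = 512) -> dict:
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response, add_special_tokens=False)["input_ids"]
    input_ids = (prompt_ids + response_ids)[:max_length]
    # Labels mirror input_ids, but the prompt portion is masked out with -100.
    labels = ([-100] * len(prompt_ids) + response_ids)[:max_length]
    return {
        "input_ids": input_ids,
        "attention_mask": [1] * len(input_ids),
        "labels": labels,
    }
```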
### Hardware
Fine-tuning was performed on Google Colab in a single session (T4).
Dataset not fully utilized due to limitations of free session. | {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["databricks/databricks-dolly-15k"]} | Chahnwoo/TinyLlama-1.1B-Chat-v1.0-0.05E-QLoRA-Databricks-SFT-Test_20240430 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-30T02:13:13+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #en #dataset-databricks/databricks-dolly-15k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: Andrew Chahnwoo Park
- Model type: LLaMA
- Language(s) (NLP): English
- License: apache-2.0
- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
### Model Sources
- Repository: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- GitHub: TinyLlama
## Training Details
### Training Data
DataBricks Instruction-Tuning Dataset (5% utilized)
### Training Procedure
1. Tokenize and label data
2. Load LLM
3. Apply Quantized Low-Rank Adaptation (QLoRA) to modules ["q_proj","k_proj","v_proj","o_proj"]
4. Perform training with HuggingFace Trainer
5. Use DataCollatorForSeq2Seq
 - Note that this data collator was chosen over DataCollatorForLanguageModeling, as the latter overwrites pre-defined "labels"
- This overwriting is done by the tf_mask_tokens and torch_mask_tokens functions for DataCollatorForLanguageModeling
#### Preprocessing
Utilized different instruction prompt templates for each category in the dataset.
##### open_qa
### Instruction:
Answer the question below. Be as specific and concise as possible.
### Question:
{instruction}
### Response:
{response}
##### general_qa
### Instruction:
	Answer the question below to the best of your knowledge.
### Question:
{instruction}
### Response:
{response}
##### classification
### Instruction:
You will be given a question and a list of potential answers to that question. You are to select the correct answers out of the available choices.
### Question:
{instruction}
### Response:
{response}
##### closed_qa
### Instruction:
You will be given a question to answer and context that contains pertinent information. Provide a concise and accurate response to the question using the information provided in the context.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
##### brainstorming
### Instruction:
You will be given a question that does not have a correct answer. You are to brainstorm one possible answer to the provided question.
### Question:
{instruction}
### Response:
{response}
##### information_extraction
### Instruction:
You will be given a question or query and some context that can be used to answer it. You are to extract relevant information from the provided context to provide an accurate response to the given query.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
##### summarization
### Instruction:
You will be given a question or request and context that can be used for your response. You are to summarize the provided context to provide an answer to the question.
### Question:
{instruction}
### Context:
{context}
### Response:
{response}
##### creative_writing
### Instruction:
You will be given a prompt that you are to write about. Be creative.
### Prompt:
{instruction}
### Response:
	{response}
#### Labelled Data Format
{
'input_ids' : List[int],
'attention_mask' : List[int],
'labels' : List[int]
}
Where labels were created by masking everything but the "response" with the mask token (-100)
### Hardware
Fine-tuning was performed on Google Colab in a single session (T4).
Dataset not fully utilized due to limitations of free session. | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Andrew Chahnwoo Park\n- Model type: LLaMA\n- Language(s) (NLP): English\n- License: apache-2.0\n- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"### Model Sources\n\n- Repository: TinyLlama/TinyLlama-1.1B-Chat-v1.0\n- GitHub: TinyLlama",
"## Training Details",
"### Training Data\n\nDataBricks Instruction-Tuning Dataset (5% utilized)",
"### Training Procedure\n\n1. Tokenize and label data\n2. Load LLM\n3. Apply Quantized Low-Rank Adaptation (QLoRA) to modules [\"q_proj\",\"k_proj\",\"v_proj\",\"o_proj\"]\n4. Perform training with HuggingFace Trainer\n5. Use DataCollatorForSeq2Seq\n - Note that this was data collator was chosen over the DataCollatorForLanguageModeling as the latter overwrites pre-defined \"labels\"\n - This overwriting is done by the tf_mask_tokens and torch_mask_tokens functions for DataCollatorForLanguageModeling",
"#### Preprocessing\n\nUtilized different instruction prompt templates for each category in the dataset.",
"##### open_qa\n ### Instruction:\n Answer the question below. Be as specific and concise as possible.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}",
"##### general_qa\n ### Instruction:\n Answer the question below to the best of your konwledge.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}",
"##### classification\n\n ### Instruction:\n You will be given a question and a list of potential answers to that question. You are to select the correct answers out of the available choices.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}",
"##### closed_qa\n\n ### Instruction:\n You will be given a question to answer and context that contains pertinent information. Provide a concise and accurate response to the question using the information provided in the context.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}",
"##### brainstorming\n\n ### Instruction:\n You will be given a question that does not have a correct answer. You are to brainstorm one possible answer to the provided question.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}",
"##### information_extraction\n\n ### Instruction:\n You will be given a question or query and some context that can be used to answer it. You are to extract relevant information from the provided context to provide an accurate response to the given query.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}",
"##### summarization\n\n ### Instruction:\n You will be given a question or request and context that can be used for your response. You are to summarize the provided context to provide an answer to the question.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}",
"##### creative_writing\n\n ### Instruction:\n You will be given a prompt that you are to write about. Be creative.\n \n ### Prompt:\n {instruction}\n \n ### Response:\n {response}\"\"\"",
"#### Labelled Data Format\n\n {\n 'input_ids' : List[int],\n 'attention_mask' : List[int],\n 'labels' : List[int]\n }\n\nWhere labels were created by masking everything but the \"response\" with the mask token (-100)",
"### Hardware\n\nFine-tuning performed on Google Colab on a single session (T4).\nDataset not fully utilized due to limitations of free session."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #en #dataset-databricks/databricks-dolly-15k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Andrew Chahnwoo Park\n- Model type: LLaMA\n- Language(s) (NLP): English\n- License: apache-2.0\n- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"### Model Sources\n\n- Repository: TinyLlama/TinyLlama-1.1B-Chat-v1.0\n- GitHub: TinyLlama",
"## Training Details",
"### Training Data\n\nDataBricks Instruction-Tuning Dataset (5% utilized)",
"### Training Procedure\n\n1. Tokenize and label data\n2. Load LLM\n3. Apply Quantized Low-Rank Adaptation (QLoRA) to modules [\"q_proj\",\"k_proj\",\"v_proj\",\"o_proj\"]\n4. Perform training with HuggingFace Trainer\n5. Use DataCollatorForSeq2Seq\n - Note that this was data collator was chosen over the DataCollatorForLanguageModeling as the latter overwrites pre-defined \"labels\"\n - This overwriting is done by the tf_mask_tokens and torch_mask_tokens functions for DataCollatorForLanguageModeling",
"#### Preprocessing\n\nUtilized different instruction prompt templates for each category in the dataset.",
"##### open_qa\n ### Instruction:\n Answer the question below. Be as specific and concise as possible.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}",
"##### general_qa\n ### Instruction:\n Answer the question below to the best of your konwledge.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}",
"##### classification\n\n ### Instruction:\n You will be given a question and a list of potential answers to that question. You are to select the correct answers out of the available choices.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}",
"##### closed_qa\n\n ### Instruction:\n You will be given a question to answer and context that contains pertinent information. Provide a concise and accurate response to the question using the information provided in the context.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}",
"##### brainstorming\n\n ### Instruction:\n You will be given a question that does not have a correct answer. You are to brainstorm one possible answer to the provided question.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}",
"##### information_extraction\n\n ### Instruction:\n You will be given a question or query and some context that can be used to answer it. You are to extract relevant information from the provided context to provide an accurate response to the given query.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}",
"##### summarization\n\n ### Instruction:\n You will be given a question or request and context that can be used for your response. You are to summarize the provided context to provide an answer to the question.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}",
"##### creative_writing\n\n ### Instruction:\n You will be given a prompt that you are to write about. Be creative.\n \n ### Prompt:\n {instruction}\n \n ### Response:\n {response}\"\"\"",
"#### Labelled Data Format\n\n {\n 'input_ids' : List[int],\n 'attention_mask' : List[int],\n 'labels' : List[int]\n }\n\nWhere labels were created by masking everything but the \"response\" with the mask token (-100)",
"### Hardware\n\nFine-tuning performed on Google Colab on a single session (T4).\nDataset not fully utilized due to limitations of free session."
] | [
64,
6,
4,
90,
34,
4,
18,
151,
22,
44,
44,
56,
73,
56,
76,
72,
48,
61,
33
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #en #dataset-databricks/databricks-dolly-15k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Andrew Chahnwoo Park\n- Model type: LLaMA\n- Language(s) (NLP): English\n- License: apache-2.0\n- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0### Model Sources\n\n- Repository: TinyLlama/TinyLlama-1.1B-Chat-v1.0\n- GitHub: TinyLlama## Training Details### Training Data\n\nDataBricks Instruction-Tuning Dataset (5% utilized)### Training Procedure\n\n1. Tokenize and label data\n2. Load LLM\n3. Apply Quantized Low-Rank Adaptation (QLoRA) to modules [\"q_proj\",\"k_proj\",\"v_proj\",\"o_proj\"]\n4. Perform training with HuggingFace Trainer\n5. Use DataCollatorForSeq2Seq\n - Note that this was data collator was chosen over the DataCollatorForLanguageModeling as the latter overwrites pre-defined \"labels\"\n - This overwriting is done by the tf_mask_tokens and torch_mask_tokens functions for DataCollatorForLanguageModeling#### Preprocessing\n\nUtilized different instruction prompt templates for each category in the dataset.##### open_qa\n ### Instruction:\n Answer the question below. Be as specific and concise as possible.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}##### general_qa\n ### Instruction:\n Answer the question below to the best of your konwledge.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}##### classification\n\n ### Instruction:\n You will be given a question and a list of potential answers to that question. You are to select the correct answers out of the available choices.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}##### closed_qa\n\n ### Instruction:\n You will be given a question to answer and context that contains pertinent information. Provide a concise and accurate response to the question using the information provided in the context.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}##### brainstorming\n\n ### Instruction:\n You will be given a question that does not have a correct answer. You are to brainstorm one possible answer to the provided question.\n \n ### Question:\n {instruction}\n \n ### Response:\n {response}##### information_extraction\n\n ### Instruction:\n You will be given a question or query and some context that can be used to answer it. You are to extract relevant information from the provided context to provide an accurate response to the given query.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}##### summarization\n\n ### Instruction:\n You will be given a question or request and context that can be used for your response. You are to summarize the provided context to provide an answer to the question.\n \n ### Question:\n {instruction}\n \n ### Context:\n {context}\n \n ### Response:\n {response}##### creative_writing\n\n ### Instruction:\n You will be given a prompt that you are to write about. 
Be creative.\n \n ### Prompt:\n {instruction}\n \n ### Response:\n {response}\"\"\"#### Labelled Data Format\n\n {\n 'input_ids' : List[int],\n 'attention_mask' : List[int],\n 'labels' : List[int]\n }\n\nWhere labels were created by masking everything but the \"response\" with the mask token (-100)### Hardware\n\nFine-tuning performed on Google Colab on a single session (T4).\nDataset not fully utilized due to limitations of free session."
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_name-finetuned-squad
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
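For reference, a sketch of how this list maps onto `TrainingArguments` (the output directory and the per-epoch evaluation strategy are assumptions, not stated in this card):

```python
# Hypothetical TrainingArguments mirroring the hyperparameter list above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="model_name-finetuned-squad",  # assumed from the model name
    learning_rate=5e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    evaluation_strategy="epoch",  # assumption; the card logs one evaluation per epoch
)
```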
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 78 | 3.3801 |
| No log | 2.0 | 156 | 2.9967 |
| No log | 3.0 | 234 | 2.9280 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "aubmindlab/bert-base-arabertv2", "model-index": [{"name": "model_name-finetuned-squad", "results": []}]} | omarezz/model_name-finetuned-squad | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:13:30+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #question-answering #generated_from_trainer #base_model-aubmindlab/bert-base-arabertv2 #endpoints_compatible #region-us
| model\_name-finetuned-squad
===========================
This model is a fine-tuned version of aubmindlab/bert-base-arabertv2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9280
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 10
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #question-answering #generated_from_trainer #base_model-aubmindlab/bert-base-arabertv2 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
49,
103,
5,
44
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #question-answering #generated_from_trainer #base_model-aubmindlab/bert-base-arabertv2 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5147
- F1 Score: 0.7679
- Accuracy: 0.7699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5898 | 1.01 | 200 | 0.5640 | 0.7306 | 0.7336 |
| 0.5489 | 2.02 | 400 | 0.5417 | 0.7422 | 0.7443 |
| 0.5352 | 3.03 | 600 | 0.5263 | 0.7522 | 0.7538 |
| 0.5276 | 4.04 | 800 | 0.5279 | 0.7562 | 0.7576 |
| 0.5221 | 5.05 | 1000 | 0.5233 | 0.7606 | 0.7614 |
| 0.5164 | 6.06 | 1200 | 0.5190 | 0.7576 | 0.7592 |
| 0.5115 | 7.07 | 1400 | 0.5254 | 0.7556 | 0.7579 |
| 0.5099 | 8.08 | 1600 | 0.5241 | 0.7487 | 0.7516 |
| 0.505 | 9.09 | 1800 | 0.5134 | 0.7596 | 0.7610 |
| 0.5001 | 10.1 | 2000 | 0.5164 | 0.7553 | 0.7573 |
| 0.495 | 11.11 | 2200 | 0.5267 | 0.7543 | 0.7566 |
| 0.4942 | 12.12 | 2400 | 0.5144 | 0.7605 | 0.7620 |
| 0.4898 | 13.13 | 2600 | 0.5187 | 0.7552 | 0.7585 |
| 0.4888 | 14.14 | 2800 | 0.5149 | 0.7563 | 0.7592 |
| 0.4832 | 15.15 | 3000 | 0.5146 | 0.7586 | 0.7610 |
| 0.4832 | 16.16 | 3200 | 0.5145 | 0.7548 | 0.7579 |
| 0.4795 | 17.17 | 3400 | 0.5196 | 0.7602 | 0.7620 |
| 0.4782 | 18.18 | 3600 | 0.5096 | 0.7612 | 0.7626 |
| 0.4723 | 19.19 | 3800 | 0.5127 | 0.7566 | 0.7585 |
| 0.4661 | 20.2 | 4000 | 0.5137 | 0.7615 | 0.7636 |
| 0.4686 | 21.21 | 4200 | 0.5153 | 0.7540 | 0.7576 |
| 0.4631 | 22.22 | 4400 | 0.5181 | 0.7639 | 0.7655 |
| 0.4572 | 23.23 | 4600 | 0.5282 | 0.7586 | 0.7604 |
| 0.4657 | 24.24 | 4800 | 0.5198 | 0.7531 | 0.7569 |
| 0.4568 | 25.25 | 5000 | 0.5150 | 0.7582 | 0.7592 |
| 0.459 | 26.26 | 5200 | 0.5173 | 0.7583 | 0.7585 |
| 0.4514 | 27.27 | 5400 | 0.5218 | 0.7532 | 0.7563 |
| 0.4525 | 28.28 | 5600 | 0.5156 | 0.7584 | 0.7595 |
| 0.4516 | 29.29 | 5800 | 0.5225 | 0.7556 | 0.7592 |
| 0.444 | 30.3 | 6000 | 0.5216 | 0.7584 | 0.7604 |
| 0.4464 | 31.31 | 6200 | 0.5201 | 0.7618 | 0.7633 |
| 0.4466 | 32.32 | 6400 | 0.5273 | 0.7549 | 0.7579 |
| 0.4416 | 33.33 | 6600 | 0.5285 | 0.7575 | 0.7607 |
| 0.4398 | 34.34 | 6800 | 0.5214 | 0.7587 | 0.7604 |
| 0.4359 | 35.35 | 7000 | 0.5268 | 0.7616 | 0.7633 |
| 0.4401 | 36.36 | 7200 | 0.5264 | 0.7524 | 0.7547 |
| 0.4372 | 37.37 | 7400 | 0.5277 | 0.7555 | 0.7579 |
| 0.4357 | 38.38 | 7600 | 0.5222 | 0.7609 | 0.7620 |
| 0.4321 | 39.39 | 7800 | 0.5293 | 0.7580 | 0.7592 |
| 0.4335 | 40.4 | 8000 | 0.5301 | 0.7584 | 0.7601 |
| 0.4316 | 41.41 | 8200 | 0.5335 | 0.7565 | 0.7598 |
| 0.4344 | 42.42 | 8400 | 0.5316 | 0.7565 | 0.7588 |
| 0.4274 | 43.43 | 8600 | 0.5326 | 0.7546 | 0.7569 |
| 0.4268 | 44.44 | 8800 | 0.5300 | 0.7575 | 0.7595 |
| 0.4267 | 45.45 | 9000 | 0.5297 | 0.7584 | 0.7601 |
| 0.4275 | 46.46 | 9200 | 0.5324 | 0.7602 | 0.7620 |
| 0.429 | 47.47 | 9400 | 0.5347 | 0.7515 | 0.7547 |
| 0.4189 | 48.48 | 9600 | 0.5337 | 0.7569 | 0.7592 |
| 0.4321 | 49.49 | 9800 | 0.5317 | 0.7564 | 0.7588 |
| 0.4227 | 50.51 | 10000 | 0.5316 | 0.7551 | 0.7573 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:13:48+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_16384\_512\_56M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5147
* F1 Score: 0.7679
* Accuracy: 0.7699
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - yuffish/colon-04
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
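
Until the snippet above is filled in, the following is a minimal sketch of how a DreamBooth checkpoint like this one is typically loaded with 🤗 Diffusers; the dtype, step count, and device choice are assumptions, while the repo id and the prompt "a photo of sks object" come from this card.

```python
# Illustrative only: load this repo as a standard Stable Diffusion pipeline
# and render the learned "sks object" concept.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yuffish/colon-04", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photo of sks object", num_inference_steps=50).images[0]
image.save("sks_object.png")
```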
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "stabilityai/stable-diffusion-2-1-base", "instance_prompt": "a photo of sks object"} | yuffish/colon-04 | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-30T02:16:26+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - yuffish/colon-04
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# DreamBooth - yuffish/colon-04\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - yuffish/colon-04\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
83,
74,
6,
7,
23,
17
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n# DreamBooth - yuffish/colon-04\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tduch/gemma-7b-it-alex-street | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:17:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
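
In the absence of an official snippet, a hedged sketch follows. It assumes, based only on the repository tags (`blip`, `text2text-generation`), that this is a BLIP-style image-captioning checkpoint with a bundled processor; if the repo is configured differently, the classes and call pattern below will not apply.

```python
# Assumed usage: BLIP-style conditional generation (image -> caption).
from PIL import Image
from transformers import AutoProcessor, BlipForConditionalGeneration

processor = AutoProcessor.from_pretrained("zinoli/image_text")
model = BlipForConditionalGeneration.from_pretrained("zinoli/image_text")

image = Image.open("example.jpg")          # any local test image
inputs = processor(images=image, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(generated[0], skip_special_tokens=True))
```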
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zinoli/image_text | null | [
"transformers",
"safetensors",
"blip",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:18:40+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
40,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | baaaaaaaam/v1 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:23:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
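
No official snippet is provided; the sketch below is an assumption based on the repository tags (`t5`, `text2text-generation`) and the repo name, treating it as a T5 summarization checkpoint. The example text and generation lengths are placeholders.

```python
# Assumed usage: T5-based summarization via the transformers pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="Talhat/summarizationTest")
article = "Replace this with the long text you want to condense."
summary = summarizer(article, max_length=60, min_length=10)[0]["summary_text"]
print(summary)
```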
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Talhat/summarizationTest | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:23:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
46,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | null |
<!-- WEASEL: AUTO-GENERATED DOCS START (do not remove) -->
# 🪐 Weasel Project: Citations of ECFR Banking Regulation in a spaCy pipeline.
Custom text classification project for spaCy v3 adapted from the spaCy v3
## 📋 project.yml
The [`project.yml`](project.yml) defines the data assets required by the
project, as well as the available commands and workflows. For details, see the
[Weasel documentation](https://github.com/explosion/weasel).
### ⏯ Commands
The following commands are defined by the project. They
can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run).
Commands are only re-run if their inputs have changed.
| Command | Description |
| --- | --- |
| `format-script` | Execute the Python script `firstStep-format.py`, which performs the initial formatting of a dataset file for the first step of the project. This script extracts text and labels from a dataset file in JSONL format and writes them to a new JSONL file in a specific format.
Usage:
```
spacy project run format-script
```
Explanation:
- The script `firstStep-format.py` reads data from the file specified in the `dataset_file` variable (`data/train200.jsonl` by default).
- It extracts text and labels from each JSON object in the dataset file.
- If both text and at least one label are available, it writes a new JSON object to the output file specified in the `output_file` variable (`data/firstStep_file.jsonl` by default) with the extracted text and label.
- If either text or label is missing in a JSON object, a warning message is printed.
- Upon completion, the script prints a message confirming the processing and the path to the output file (a sketch of this logic follows below).
|
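A minimal sketch of the logic `firstStep-format.py` is described as implementing. The exact JSON field names (`text`, `label`) are assumptions here, not taken from the script itself:

```python
import json

dataset_file = "data/train200.jsonl"        # default input dataset
output_file = "data/firstStep_file.jsonl"   # formatted output

with open(dataset_file, encoding="utf8") as fin, open(output_file, "w", encoding="utf8") as fout:
    for line in fin:
        record = json.loads(line)
        text = record.get("text")
        labels = record.get("label") or []   # assumed field name for the label list
        if text and labels:
            # keep only the fields needed by the training step
            fout.write(json.dumps({"text": text, "label": labels}) + "\n")
        else:
            print("Warning: skipping a record with missing text or label")

print(f"Processing complete. Formatted data written to {output_file}")
```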
| `train-text-classification-model` | Train the text classification model for the second step of the project using the `secondStep-score.py` script. This script loads a blank English spaCy model and adds a text classification pipeline to it. It then trains the model using the processed data from the first step.
Usage:
```
spacy project run train-text-classification-model
```
Explanation:
- The script `secondStep-score.py` loads a blank English spaCy model and adds a text classification pipeline to it.
- It reads processed data from the file specified in the `processed_data_file` variable (`data/firstStep_file.jsonl` by default).
- The processed data is converted to spaCy format for training the model.
- The model is trained using the converted data for a specified number of iterations (`n_iter`).
- Losses are printed for each iteration during training.
- Upon completion, the trained model is saved to the specified output directory (`./my_trained_model` by default); a sketch of this training loop follows below.
|
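For orientation, a minimal sketch of the training loop that `secondStep-score.py` is described as running. The label field name, the value of `n_iter`, and the batching strategy are assumptions:

```python
import json
import random

import spacy
from spacy.training import Example

processed_data_file = "data/firstStep_file.jsonl"
output_dir = "./my_trained_model"
n_iter = 10  # assumed number of training iterations

nlp = spacy.blank("en")                        # blank English pipeline
textcat = nlp.add_pipe("textcat_multilabel")   # multilabel text classification component

with open(processed_data_file, encoding="utf8") as fin:
    records = [json.loads(line) for line in fin]

# Register every label seen in the processed data with the component.
for record in records:
    for label in record["label"]:
        textcat.add_label(label)

# Convert to spaCy Example objects; cats maps every known label to 0.0 or 1.0.
examples = []
for record in records:
    cats = {label: float(label in record["label"]) for label in textcat.labels}
    examples.append(Example.from_dict(nlp.make_doc(record["text"]), {"cats": cats}))

nlp.initialize(lambda: examples)
for i in range(n_iter):
    random.shuffle(examples)
    losses = {}
    nlp.update(examples, losses=losses)
    print(f"Iteration {i + 1}, losses: {losses}")

nlp.to_disk(output_dir)
```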
| `classify-unlabeled-data` | Classify the unlabeled data for the third step of the project using the `thirdStep-label.py` script. This script loads the trained spaCy model from the previous step and classifies each record in the unlabeled dataset.
Usage:
```
spacy project run classify-unlabeled-data
```
Explanation:
- The script `thirdStep-label.py` loads the trained spaCy model from the specified model directory (`./my_trained_model` by default).
- It reads the unlabeled data from the file specified in the `unlabeled_data_file` variable (`data/train.jsonl` by default).
- Each record in the unlabeled data is classified using the loaded model.
- The predicted labels for each record are extracted and stored along with the text.
- The classified data is optionally saved to a file specified in the `output_file` variable (`data/thirdStep_file.jsonl` by default); a sketch of this classification pass follows below.
|
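A sketch of the classification pass described for `thirdStep-label.py`; the field names and the use of `nlp.pipe` are assumptions about the implementation:

```python
import json

import spacy

model_dir = "./my_trained_model"
unlabeled_data_file = "data/train.jsonl"
output_file = "data/thirdStep_file.jsonl"

nlp = spacy.load(model_dir)

with open(unlabeled_data_file, encoding="utf8") as fin:
    records = [json.loads(line) for line in fin]
texts = [record["text"] for record in records]

with open(output_file, "w", encoding="utf8") as fout:
    # nlp.pipe streams the texts through the trained pipeline
    for record, doc in zip(records, nlp.pipe(texts)):
        # doc.cats maps each label to a score between 0 and 1
        fout.write(json.dumps({"text": record["text"], "cats": doc.cats}) + "\n")
```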
| `format-labeled-data` | Format the labeled data for the final step of the project using the `finalStep-formatLabel.py` script. This script processes the classified data from the third step and transforms it into a specific format, considering a threshold for label acceptance.
Usage:
```
spacy project run format-labeled-data
```
Explanation:
- The script `finalStep-formatLabel.py` reads classified data from the file specified in the `input_file` variable (`data/thirdStep_file.jsonl` by default).
- For each record, it determines accepted categories based on a specified threshold.
- It constructs an output record containing the text, predicted labels, accepted categories, answer (accept/reject), and options with meta information.
- The transformed data is written to the file specified in the `output_file` variable (`data/train4465.jsonl` by default); a sketch of this thresholding step follows below.
|
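The final formatting step can be pictured roughly as below. The 0.5 threshold and the exact shape of the output record (`accept`, `answer`, `options`) are assumptions based on the description above, not the script's actual output schema:

```python
import json

input_file = "data/thirdStep_file.jsonl"
output_file = "data/train4465.jsonl"
threshold = 0.5  # assumed acceptance threshold

with open(input_file, encoding="utf8") as fin, open(output_file, "w", encoding="utf8") as fout:
    for line in fin:
        record = json.loads(line)
        cats = record["cats"]
        accepted = [label for label, score in cats.items() if score >= threshold]
        out = {
            "text": record["text"],
            "cats": cats,                      # predicted labels with scores
            "accept": accepted,                # categories above the threshold
            "answer": "accept" if accepted else "reject",
            # options carry meta information about each candidate label
            "options": [{"id": label, "meta": round(score, 3)} for label, score in cats.items()],
        }
        fout.write(json.dumps(out) + "\n")
```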
| `setup-environment` | Set up the Python virtual environment.
|
| `review-evaluation-data` | Review the evaluation data in Prodigy and automatically accept annotations.
Usage:
```
spacy project run review-evaluation-data
```
Explanation:
- The command reviews the evaluation data in Prodigy.
- It automatically accepts annotations made during the review process.
- Only sessions allowed by the environment variable PRODIGY_ALLOWED_SESSIONS are permitted to review data. In this case, the session 'reviwer' is allowed.
|
| `export-reviewed-evaluation-data` | Export the reviewed evaluation data from Prodigy to a JSONL file named 'goldenEval.jsonl'.
Usage:
```
spacy project run export-reviewed-evaluation-data
```
Explanation:
- The command exports the reviewed evaluation data from Prodigy to a JSONL file.
- The data is exported from the Prodigy database associated with the project named 'project3eval-review'.
- The exported data is saved to the file 'goldenEval.jsonl'.
- This command helps in preserving the reviewed annotations for further analysis or processing.
|
| `import-training-data` | Import the training data into Prodigy from a JSONL file named 'train200.jsonl'.
Usage:
```
spacy project run import-training-data
```
Explanation:
- The command imports the training data into Prodigy from the specified JSONL file.
- The data is imported into the Prodigy database associated with the project named 'prodigy3train'.
- This command prepares the training data for annotation and model training in Prodigy.
|
| `import-golden-evaluation-data` | Import the golden evaluation data into Prodigy from a JSONL file named 'goldeneval.jsonl'.
Usage:
```
spacy project run import-golden-evaluation-data
```
Explanation:
- The command imports the golden evaluation data into Prodigy from the specified JSONL file.
- The data is imported into the Prodigy database associated with the project named 'golden3'.
- This command prepares the golden evaluation data for further analysis and model evaluation in Prodigy.
|
| `train-model-experiment1` | Train a text classification model using Prodigy with the 'prodigy3train' dataset and evaluating on 'golden3'.
Usage:
```
spacy project run train-model-experiment1
```
Explanation:
- The command trains a text classification model using Prodigy.
- It uses the 'prodigy3train' dataset for training and evaluates the model on the 'golden3' dataset.
- The trained model is saved to the './output/experiment1' directory.
|
| `download-model` | Download the English language model 'en_core_web_lg' from spaCy.
Usage:
```
spacy project run download-model
```
Explanation:
- The command downloads the English language model 'en_core_web_lg' from spaCy.
- This model is used as the base model for further data processing and training in the project.
|
| `convert-data-to-spacy-format` | Convert the annotated data from Prodigy to spaCy format using the 'prodigy3train' and 'golden3' datasets.
Usage:
```
spacy project run convert-data-to-spacy-format
```
Explanation:
- The command converts the annotated data from Prodigy to spaCy format.
- It uses the 'prodigy3train' and 'golden3' datasets for conversion.
- The converted data is saved to the './corpus' directory with the base model 'en_core_web_lg'.
|
| `train-custom-model` | Train a custom text classification model using spaCy with the converted data in spaCy format.
Usage:
```
spacy project run train-custom-model
```
Explanation:
- The command trains a custom text classification model using spaCy.
- It uses the converted data in spaCy format located in the './corpus' directory.
- The model is trained using the configuration defined in 'corpus/config.cfg'.
|
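Whichever experiment produced the model, applying it afterwards looks the same. A short, assumed usage example (paths, threshold, and the sample sentence are illustrative only):

```python
import spacy

# Load the best model from experiment 1; any of the saved model directories works the same way.
nlp = spacy.load("./output/experiment1/model-best")

doc = nlp("The institution must file a notice of change in bank control.")
# Keep only the categories scored above an illustrative 0.5 threshold.
predicted = {label: score for label, score in doc.cats.items() if score >= 0.5}
print(predicted)
```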
### ⭐ Workflows
The following workflows are defined by the project. They
can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run)
and will run the specified commands in order. Commands are only re-run if their
inputs have changed.
| Workflow | Steps |
| --- | --- |
| `all` | `format-script` → `train-text-classification-model` → `classify-unlabeled-data` → `format-labeled-data` → `setup-environment` → `review-evaluation-data` → `export-reviewed-evaluation-data` → `import-training-data` → `import-golden-evaluation-data` → `train-model-experiment1` → `download-model` → `convert-data-to-spacy-format` → `train-custom-model` |
### 🗂 Assets
The following assets are defined by the project. They can
be fetched by running [`weasel assets`](https://github.com/explosion/weasel/tree/main/docs/cli.md#open_file_folder-assets)
in the project directory.
| File | Source | Description |
| --- | --- | --- |
| [`corpus/labels/ner.json`](corpus/labels/ner.json) | Local | JSON file containing NER labels |
| [`corpus/labels/parser.json`](corpus/labels/parser.json) | Local | JSON file containing parser labels |
| [`corpus/labels/tagger.json`](corpus/labels/tagger.json) | Local | JSON file containing tagger labels |
| [`corpus/labels/textcat_multilabel.json`](corpus/labels/textcat_multilabel.json) | Local | JSON file containing multilabel text classification labels |
| [`data/eval.jsonl`](data/eval.jsonl) | Local | JSONL file containing evaluation data |
| [`data/firstStep_file.jsonl`](data/firstStep_file.jsonl) | Local | JSONL file containing formatted data from the first step |
| `data/five_examples_annotated5.jsonl` | Local | JSONL file containing five annotated examples |
| [`data/goldenEval.jsonl`](data/goldenEval.jsonl) | Local | JSONL file containing golden evaluation data |
| [`data/thirdStep_file.jsonl`](data/thirdStep_file.jsonl) | Local | JSONL file containing classified data from the third step |
| [`data/train.jsonl`](data/train.jsonl) | Local | JSONL file containing training data |
| [`data/train200.jsonl`](data/train200.jsonl) | Local | JSONL file containing initial training data |
| [`data/train4465.jsonl`](data/train4465.jsonl) | Local | JSONL file containing formatted and labeled training data |
| [`my_trained_model/textcat_multilabel/cfg`](my_trained_model/textcat_multilabel/cfg) | Local | Configuration files for the text classification model |
| [`my_trained_model/textcat_multilabel/model`](my_trained_model/textcat_multilabel/model) | Local | Trained model files for the text classification model |
| [`my_trained_model/vocab/key2row`](my_trained_model/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary |
| [`my_trained_model/vocab/lookups.bin`](my_trained_model/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary |
| [`my_trained_model/vocab/strings.json`](my_trained_model/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary |
| [`my_trained_model/vocab/vectors`](my_trained_model/vocab/vectors) | Local | Directory containing vector files for the vocabulary |
| [`my_trained_model/vocab/vectors.cfg`](my_trained_model/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary |
| [`my_trained_model/config.cfg`](my_trained_model/config.cfg) | Local | Configuration file for the trained model |
| [`my_trained_model/meta.json`](my_trained_model/meta.json) | Local | JSON file containing metadata for the trained model |
| [`my_trained_model/tokenizer`](my_trained_model/tokenizer) | Local | Tokenizer files for the trained model |
| [`output/experiment1/model-best/textcat_multilabel/cfg`](output/experiment1/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 1 |
| [`output/experiment1/model-best/textcat_multilabel/model`](output/experiment1/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/key2row`](output/experiment1/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/lookups.bin`](output/experiment1/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/strings.json`](output/experiment1/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/vectors`](output/experiment1/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/vocab/vectors.cfg`](output/experiment1/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 1 |
| [`output/experiment1/model-best/config.cfg`](output/experiment1/model-best/config.cfg) | Local | Configuration file for the best model in experiment 1 |
| [`output/experiment1/model-best/meta.json`](output/experiment1/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 1 |
| [`output/experiment1/model-best/tokenizer`](output/experiment1/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 1 |
| [`output/experiment1/model-last/textcat_multilabel/cfg`](output/experiment1/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 1 |
| [`output/experiment1/model-last/textcat_multilabel/model`](output/experiment1/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/key2row`](output/experiment1/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/lookups.bin`](output/experiment1/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/strings.json`](output/experiment1/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/vectors`](output/experiment1/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/vocab/vectors.cfg`](output/experiment1/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 1 |
| [`output/experiment1/model-last/config.cfg`](output/experiment1/model-last/config.cfg) | Local | Configuration file for the last model in experiment 1 |
| [`output/experiment1/model-last/meta.json`](output/experiment1/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 1 |
| [`output/experiment1/model-last/tokenizer`](output/experiment1/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 1 |
| [`output/experiment3/model-best/textcat_multilabel/cfg`](output/experiment3/model-best/textcat_multilabel/cfg) | Local | Configuration files for the best model in experiment 3 |
| [`output/experiment3/model-best/textcat_multilabel/model`](output/experiment3/model-best/textcat_multilabel/model) | Local | Trained model files for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/key2row`](output/experiment3/model-best/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/lookups.bin`](output/experiment3/model-best/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/strings.json`](output/experiment3/model-best/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/vectors`](output/experiment3/model-best/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/vocab/vectors.cfg`](output/experiment3/model-best/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the best model in experiment 3 |
| [`output/experiment3/model-best/config.cfg`](output/experiment3/model-best/config.cfg) | Local | Configuration file for the best model in experiment 3 |
| [`output/experiment3/model-best/meta.json`](output/experiment3/model-best/meta.json) | Local | JSON file containing metadata for the best model in experiment 3 |
| [`output/experiment3/model-best/tokenizer`](output/experiment3/model-best/tokenizer) | Local | Tokenizer files for the best model in experiment 3 |
| [`output/experiment3/model-last/textcat_multilabel/cfg`](output/experiment3/model-last/textcat_multilabel/cfg) | Local | Configuration files for the last model in experiment 3 |
| [`output/experiment3/model-last/textcat_multilabel/model`](output/experiment3/model-last/textcat_multilabel/model) | Local | Trained model files for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/key2row`](output/experiment3/model-last/vocab/key2row) | Local | Mapping from keys to row indices in the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/lookups.bin`](output/experiment3/model-last/vocab/lookups.bin) | Local | Binary lookups file for the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/strings.json`](output/experiment3/model-last/vocab/strings.json) | Local | JSON file containing string representations of the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/vectors`](output/experiment3/model-last/vocab/vectors) | Local | Directory containing vector files for the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/vocab/vectors.cfg`](output/experiment3/model-last/vocab/vectors.cfg) | Local | Configuration file for vectors in the vocabulary for the last model in experiment 3 |
| [`output/experiment3/model-last/config.cfg`](output/experiment3/model-last/config.cfg) | Local | Configuration file for the last model in experiment 3 |
| [`output/experiment3/model-last/meta.json`](output/experiment3/model-last/meta.json) | Local | JSON file containing metadata for the last model in experiment 3 |
| [`output/experiment3/model-last/tokenizer`](output/experiment3/model-last/tokenizer) | Local | Tokenizer files for the last model in experiment 3 |
| [`python_Code/finalStep-formatLabel.py`](python_Code/finalStep-formatLabel.py) | Local | Python script for formatting labeled data in the final step |
| [`python_Code/firstStep-format.py`](python_Code/firstStep-format.py) | Local | Python script for formatting data in the first step |
| [`python_Code/five_examples_annotated.ipynb`](python_Code/five_examples_annotated.ipynb) | Local | Jupyter notebook containing five annotated examples |
| [`python_Code/secondStep-score.py`](python_Code/secondStep-score.py) | Local | Python script for scoring data in the second step |
| [`python_Code/thirdStep-label.py`](python_Code/thirdStep-label.py) | Local | Python script for labeling data in the third step |
| [`python_Code/train_eval_split.ipynb`](python_Code/train_eval_split.ipynb) | Local | Jupyter notebook for training and evaluation data splitting |
| [`TerminalCode.txt`](TerminalCode.txt) | Local | Text file containing terminal code |
| [`README.md`](README.md) | Local | Markdown file containing project documentation |
| [`prodigy.json`](prodigy.json) | Local | JSON file containing Prodigy configuration |
<!-- WEASEL: AUTO-GENERATED DOCS END (do not remove) -->
| {"language": "en", "tags": ["machine learning", "natural language processing", "huggingface"]} | DagimB/ecfr-textcat | null | [
"machine learning",
"natural language processing",
"huggingface",
"en",
"region:us"
] | null | 2024-04-30T02:24:02+00:00 | [] | [
"en"
] | TAGS
#machine learning #natural language processing #huggingface #en #region-us
| Weasel Project: Citations of ECFR Banking Regulation in a spaCy pipeline.
=========================================================================
Custom text classification project for spaCy v3 adapted from the spaCy v3
URL
---
The 'URL' defines the data assets required by the
project, as well as the available commands and workflows. For details, see the
Weasel documentation.
### ⏯ Commands
The following commands are defined by the project. They
can be executed using ['weasel run [name]'](URL
Commands are only re-run if their inputs have changed.
Usage:
Explanation:
* The script 'URL' reads data from the file specified in the 'dataset\_file' variable ('data/URL' by default).
* It extracts text and labels from each JSON object in the dataset file.
* If both text and at least one label are available, it writes a new JSON object to the output file specified in the 'output\_file' variable ('data/firstStep\_file.jsonl' by default) with the extracted text and label.
* If either text or label is missing in a JSON object, a warning message is printed.
* Upon completion, the script prints a message confirming the processing and the path to the output file.
|
| 'train-text-classification-model' | Train the text classification model for the second step of the project using the 'URL' script. This script loads a blank English spaCy model and adds a text classification pipeline to it. It then trains the model using the processed data from the first step.
Usage:
Explanation:
* The script 'URL' loads a blank English spaCy model and adds a text classification pipeline to it.
* It reads processed data from the file specified in the 'processed\_data\_file' variable ('data/firstStep\_file.jsonl' by default).
* The processed data is converted to spaCy format for training the model.
* The model is trained using the converted data for a specified number of iterations ('n\_iter').
* Losses are printed for each iteration during training.
* Upon completion, the trained model is saved to the specified output directory ('./my\_trained\_model' by default).
|
| 'classify-unlabeled-data' | Classify the unlabeled data for the third step of the project using the 'URL' script. This script loads the trained spaCy model from the previous step and classifies each record in the unlabeled dataset.
Usage:
Explanation:
* The script 'URL' loads the trained spaCy model from the specified model directory ('./my\_trained\_model' by default).
* It reads the unlabeled data from the file specified in the 'unlabeled\_data\_file' variable ('data/URL' by default).
* Each record in the unlabeled data is classified using the loaded model.
* The predicted labels for each record are extracted and stored along with the text.
* The classified data is optionally saved to a file specified in the 'output\_file' variable ('data/thirdStep\_file.jsonl' by default).
|
| 'format-labeled-data' | Format the labeled data for the final step of the project using the 'URL' script. This script processes the classified data from the third step and transforms it into a specific format, considering a threshold for label acceptance.
Usage:
Explanation:
* The script 'URL' reads classified data from the file specified in the 'input\_file' variable ('data/thirdStep\_file.jsonl' by default).
* For each record, it determines accepted categories based on a specified threshold.
* It constructs an output record containing the text, predicted labels, accepted categories, answer (accept/reject), and options with meta information.
* The transformed data is written to the file specified in the 'output\_file' variable ('data/URL' by default).
|
| 'setup-environment' | Set up the Python virtual environment.
|
| 'review-evaluation-data' | Review the evaluation data in Prodigy and automatically accept annotations.
Usage:
Explanation:
* The command reviews the evaluation data in Prodigy.
* It automatically accepts annotations made during the review process.
* Only sessions allowed by the environment variable PRODIGY\_ALLOWED\_SESSIONS are permitted to review data. In this case, the session 'reviwer' is allowed.
|
| 'export-reviewed-evaluation-data' | Export the reviewed evaluation data from Prodigy to a JSONL file named 'URL'.
Usage:
Explanation:
* The command exports the reviewed evaluation data from Prodigy to a JSONL file.
* The data is exported from the Prodigy database associated with the project named 'project3eval-review'.
* The exported data is saved to the file 'URL'.
* This command helps in preserving the reviewed annotations for further analysis or processing.
|
| 'import-training-data' | Import the training data into Prodigy from a JSONL file named 'URL'.
Usage:
Explanation:
* The command imports the training data into Prodigy from the specified JSONL file.
* The data is imported into the Prodigy database associated with the project named 'prodigy3train'.
* This command prepares the training data for annotation and model training in Prodigy.
|
| 'import-golden-evaluation-data' | Import the golden evaluation data into Prodigy from a JSONL file named 'URL'.
Usage:
Explanation:
* The command imports the golden evaluation data into Prodigy from the specified JSONL file.
* The data is imported into the Prodigy database associated with the project named 'golden3'.
* This command prepares the golden evaluation data for further analysis and model evaluation in Prodigy.
|
| 'train-model-experiment1' | Train a text classification model using Prodigy with the 'prodigy3train' dataset and evaluating on 'golden3'.
Usage:
Explanation:
* The command trains a text classification model using Prodigy.
* It uses the 'prodigy3train' dataset for training and evaluates the model on the 'golden3' dataset.
* The trained model is saved to the './output/experiment1' directory.
|
| 'download-model' | Download the English language model 'en\_core\_web\_lg' from spaCy.
Usage:
Explanation:
* The command downloads the English language model 'en\_core\_web\_lg' from spaCy.
* This model is used as the base model for further data processing and training in the project.
|
| 'convert-data-to-spacy-format' | Convert the annotated data from Prodigy to spaCy format using the 'prodigy3train' and 'golden3' datasets.
Usage:
Explanation:
* The command converts the annotated data from Prodigy to spaCy format.
* It uses the 'prodigy3train' and 'golden3' datasets for conversion.
* The converted data is saved to the './corpus' directory with the base model 'en\_core\_web\_lg'.
|
| 'train-custom-model' | Train a custom text classification model using spaCy with the converted data in spaCy format.
Usage:
Explanation:
* The command trains a custom text classification model using spaCy.
* It uses the converted data in spaCy format located in the './corpus' directory.
* The model is trained using the configuration defined in 'corpus/URL'.
|
### ⭐ Workflows
The following workflows are defined by the project. They
can be executed using ['weasel run [name]'](URL
and will run the specified commands in order. Commands are only re-run if their
inputs have changed.
### Assets
The following assets are defined by the project. They can
be fetched by running 'weasel assets'
in the project directory.
File: 'corpus/labels/URL', Source: Local, Description: JSON file containing NER labels
File: 'corpus/labels/URL', Source: Local, Description: JSON file containing parser labels
File: 'corpus/labels/URL', Source: Local, Description: JSON file containing tagger labels
File: 'corpus/labels/textcat\_multilabel.json', Source: Local, Description: JSON file containing multilabel text classification labels
File: 'data/URL', Source: Local, Description: JSONL file containing evaluation data
File: 'data/firstStep\_file.jsonl', Source: Local, Description: JSONL file containing formatted data from the first step
File: 'data/five\_examples\_annotated5.jsonl', Source: Local, Description: JSONL file containing five annotated examples
File: 'data/URL', Source: Local, Description: JSONL file containing golden evaluation data
File: 'data/thirdStep\_file.jsonl', Source: Local, Description: JSONL file containing classified data from the third step
File: 'data/URL', Source: Local, Description: JSONL file containing training data
File: 'data/URL', Source: Local, Description: JSONL file containing initial training data
File: 'data/URL', Source: Local, Description: JSONL file containing formatted and labeled training data
File: 'my\_trained\_model/textcat\_multilabel/cfg', Source: Local, Description: Configuration files for the text classification model
File: 'my\_trained\_model/textcat\_multilabel/model', Source: Local, Description: Trained model files for the text classification model
File: 'my\_trained\_model/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary
File: 'my\_trained\_model/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary
File: 'my\_trained\_model/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary
File: 'my\_trained\_model/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary
File: 'my\_trained\_model/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary
File: 'my\_trained\_model/URL', Source: Local, Description: Configuration file for the trained model
File: 'my\_trained\_model/URL', Source: Local, Description: JSON file containing metadata for the trained model
File: 'my\_trained\_model/tokenizer', Source: Local, Description: Tokenizer files for the trained model
File: 'output/experiment1/model-best/textcat\_multilabel/cfg', Source: Local, Description: Configuration files for the best model in experiment 1
File: 'output/experiment1/model-best/textcat\_multilabel/model', Source: Local, Description: Trained model files for the best model in experiment 1
File: 'output/experiment1/model-best/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the best model in experiment 1
File: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the best model in experiment 1
File: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the best model in experiment 1
File: 'output/experiment1/model-best/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the best model in experiment 1
File: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the best model in experiment 1
File: 'output/experiment1/model-best/URL', Source: Local, Description: Configuration file for the best model in experiment 1
File: 'output/experiment1/model-best/URL', Source: Local, Description: JSON file containing metadata for the best model in experiment 1
File: 'output/experiment1/model-best/tokenizer', Source: Local, Description: Tokenizer files for the best model in experiment 1
File: 'output/experiment1/model-last/textcat\_multilabel/cfg', Source: Local, Description: Configuration files for the last model in experiment 1
File: 'output/experiment1/model-last/textcat\_multilabel/model', Source: Local, Description: Trained model files for the last model in experiment 1
File: 'output/experiment1/model-last/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the last model in experiment 1
File: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the last model in experiment 1
File: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the last model in experiment 1
File: 'output/experiment1/model-last/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the last model in experiment 1
File: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the last model in experiment 1
File: 'output/experiment1/model-last/URL', Source: Local, Description: Configuration file for the last model in experiment 1
File: 'output/experiment1/model-last/URL', Source: Local, Description: JSON file containing metadata for the last model in experiment 1
File: 'output/experiment1/model-last/tokenizer', Source: Local, Description: Tokenizer files for the last model in experiment 1
File: 'output/experiment3/model-best/textcat\_multilabel/cfg', Source: Local, Description: Configuration files for the best model in experiment 3
File: 'output/experiment3/model-best/textcat\_multilabel/model', Source: Local, Description: Trained model files for the best model in experiment 3
File: 'output/experiment3/model-best/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the best model in experiment 3
File: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the best model in experiment 3
File: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the best model in experiment 3
File: 'output/experiment3/model-best/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the best model in experiment 3
File: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the best model in experiment 3
File: 'output/experiment3/model-best/URL', Source: Local, Description: Configuration file for the best model in experiment 3
File: 'output/experiment3/model-best/URL', Source: Local, Description: JSON file containing metadata for the best model in experiment 3
File: 'output/experiment3/model-best/tokenizer', Source: Local, Description: Tokenizer files for the best model in experiment 3
File: 'output/experiment3/model-last/textcat\_multilabel/cfg', Source: Local, Description: Configuration files for the last model in experiment 3
File: 'output/experiment3/model-last/textcat\_multilabel/model', Source: Local, Description: Trained model files for the last model in experiment 3
File: 'output/experiment3/model-last/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the last model in experiment 3
File: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the last model in experiment 3
File: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the last model in experiment 3
File: 'output/experiment3/model-last/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the last model in experiment 3
File: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the last model in experiment 3
File: 'output/experiment3/model-last/URL', Source: Local, Description: Configuration file for the last model in experiment 3
File: 'output/experiment3/model-last/URL', Source: Local, Description: JSON file containing metadata for the last model in experiment 3
File: 'output/experiment3/model-last/tokenizer', Source: Local, Description: Tokenizer files for the last model in experiment 3
File: 'python\_Code/URL', Source: Local, Description: Python script for formatting labeled data in the final step
File: 'python\_Code/URL', Source: Local, Description: Python script for formatting data in the first step
File: 'python\_Code/five\_examples\_annotated.ipynb', Source: Local, Description: Jupyter notebook containing five annotated examples
File: 'python\_Code/URL', Source: Local, Description: Python script for scoring data in the second step
File: 'python\_Code/URL', Source: Local, Description: Python script for labeling data in the third step
File: 'python\_Code/train\_eval\_split.ipynb', Source: Local, Description: Jupyter notebook for training and evaluation data splitting
File: 'URL', Source: Local, Description: Text file containing terminal code
File: 'URL', Source: Local, Description: Markdown file containing project documentation
File: 'URL', Source: Local, Description: JSON file containing Prodigy configuration
| [
"### โฏ Commands\n\n\nThe following commands are defined by the project. They\ncan be executed using ['weasel run [name]'](URL\nCommands are only re-run if their inputs have changed.\n\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' reads data from the file specified in the 'dataset\\_file' variable ('data/URL' by default).\n* It extracts text and labels from each JSON object in the dataset file.\n* If both text and at least one label are available, it writes a new JSON object to the output file specified in the 'output\\_file' variable ('data/firstStep\\_file.jsonl' by default) with the extracted text and label.\n* If either text or label is missing in a JSON object, a warning message is printed.\n* Upon completion, the script prints a message confirming the processing and the path to the output file.\n|\n| 'train-text-classification-model' | Train the text classification model for the second step of the project using the 'URL' script. This script loads a blank English spaCy model and adds a text classification pipeline to it. It then trains the model using the processed data from the first step.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' loads a blank English spaCy model and adds a text classification pipeline to it.\n* It reads processed data from the file specified in the 'processed\\_data\\_file' variable ('data/firstStep\\_file.jsonl' by default).\n* The processed data is converted to spaCy format for training the model.\n* The model is trained using the converted data for a specified number of iterations ('n\\_iter').\n* Losses are printed for each iteration during training.\n* Upon completion, the trained model is saved to the specified output directory ('./my\\_trained\\_model' by default).\n|\n| 'classify-unlabeled-data' | Classify the unlabeled data for the third step of the project using the 'URL' script. This script loads the trained spaCy model from the previous step and classifies each record in the unlabeled dataset.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' loads the trained spaCy model from the specified model directory ('./my\\_trained\\_model' by default).\n* It reads the unlabeled data from the file specified in the 'unlabeled\\_data\\_file' variable ('data/URL' by default).\n* Each record in the unlabeled data is classified using the loaded model.\n* The predicted labels for each record are extracted and stored along with the text.\n* The classified data is optionally saved to a file specified in the 'output\\_file' variable ('data/thirdStep\\_file.jsonl' by default).\n|\n| 'format-labeled-data' | Format the labeled data for the final step of the project using the 'URL' script. 
This script processes the classified data from the third step and transforms it into a specific format, considering a threshold for label acceptance.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' reads classified data from the file specified in the 'input\\_file' variable ('data/thirdStep\\_file.jsonl' by default).\n* For each record, it determines accepted categories based on a specified threshold.\n* It constructs an output record containing the text, predicted labels, accepted categories, answer (accept/reject), and options with meta information.\n* The transformed data is written to the file specified in the 'output\\_file' variable ('data/URL' by default).\n|\n| 'setup-environment' | Set up the Python virtual environment.\n|\n| 'review-evaluation-data' | Review the evaluation data in Prodigy and automatically accept annotations.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command reviews the evaluation data in Prodigy.\n* It automatically accepts annotations made during the review process.\n* Only sessions allowed by the environment variable PRODIGY\\_ALLOWED\\_SESSIONS are permitted to review data. In this case, the session 'reviwer' is allowed.\n|\n| 'export-reviewed-evaluation-data' | Export the reviewed evaluation data from Prodigy to a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command exports the reviewed evaluation data from Prodigy to a JSONL file.\n* The data is exported from the Prodigy database associated with the project named 'project3eval-review'.\n* The exported data is saved to the file 'URL'.\n* This command helps in preserving the reviewed annotations for further analysis or processing.\n|\n| 'import-training-data' | Import the training data into Prodigy from a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command imports the training data into Prodigy from the specified JSONL file.\n* The data is imported into the Prodigy database associated with the project named 'prodigy3train'.\n* This command prepares the training data for annotation and model training in Prodigy.\n|\n| 'import-golden-evaluation-data' | Import the golden evaluation data into Prodigy from a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command imports the golden evaluation data into Prodigy from the specified JSONL file.\n* The data is imported into the Prodigy database associated with the project named 'golden3'.\n* This command prepares the golden evaluation data for further analysis and model evaluation in Prodigy.\n|\n| 'train-model-experiment1' | Train a text classification model using Prodigy with the 'prodigy3train' dataset and evaluating on 'golden3'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command trains a text classification model using Prodigy.\n* It uses the 'prodigy3train' dataset for training and evaluates the model on the 'golden3' dataset.\n* The trained model is saved to the './output/experiment1' directory.\n|\n| 'download-model' | Download the English language model 'en\\_core\\_web\\_lg' from spaCy.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command downloads the English language model 'en\\_core\\_web\\_lg' from spaCy.\n* This model is used as the base model for further data processing and training in the project.\n|\n| 'convert-data-to-spacy-format' | Convert the annotated data from Prodigy to spaCy format using the 'prodigy3train' and 'golden3' datasets.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command converts the annotated data from Prodigy to spaCy format.\n* It uses the 'prodigy3train' and 'golden3' 
datasets for conversion.\n* The converted data is saved to the './corpus' directory with the base model 'en\\_core\\_web\\_lg'.\n|\n| 'train-custom-model' | Train a custom text classification model using spaCy with the converted data in spaCy format.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command trains a custom text classification model using spaCy.\n* It uses the converted data in spaCy format located in the './corpus' directory.\n* The model is trained using the configuration defined in 'corpus/URL'.\n|",
"### โญ Workflows\n\n\nThe following workflows are defined by the project. They\ncan be executed using ['weasel run [name]'](URL\nand will run the specified commands in order. Commands are only re-run if their\ninputs have changed.",
"### Assets\n\n\nThe following assets are defined by the project. They can\nbe fetched by running 'weasel assets'\nin the project directory.\n\n\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing NER labels\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing parser labels\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing tagger labels\nFile: 'corpus/labels/textcat\\_multilabel.json', Source: Local, Description: JSON file containing multilabel text classification labels\nFile: 'data/URL', Source: Local, Description: JSONL file containing evaluation data\nFile: 'data/firstStep\\_file.jsonl', Source: Local, Description: JSONL file containing formatted data from the first step\nFile: 'data/five\\_examples\\_annotated5.jsonl', Source: Local, Description: JSONL file containing five annotated examples\nFile: 'data/URL', Source: Local, Description: JSONL file containing golden evaluation data\nFile: 'data/thirdStep\\_file.jsonl', Source: Local, Description: JSONL file containing classified data from the third step\nFile: 'data/URL', Source: Local, Description: JSONL file containing training data\nFile: 'data/URL', Source: Local, Description: JSONL file containing initial training data\nFile: 'data/URL', Source: Local, Description: JSONL file containing formatted and labeled training data\nFile: 'my\\_trained\\_model/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the text classification model\nFile: 'my\\_trained\\_model/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the text classification model\nFile: 'my\\_trained\\_model/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary\nFile: 'my\\_trained\\_model/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary\nFile: 'my\\_trained\\_model/URL', Source: Local, Description: Configuration file for the trained model\nFile: 'my\\_trained\\_model/URL', Source: Local, Description: JSON file containing metadata for the trained model\nFile: 'my\\_trained\\_model/tokenizer', Source: Local, Description: Tokenizer files for the trained model\nFile: 'output/experiment1/model-best/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the best model in experiment 1\nFile: 'output/experiment1/model-best/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the best model in experiment 
1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/URL', Source: Local, Description: Configuration file for the best model in experiment 1\nFile: 'output/experiment1/model-best/URL', Source: Local, Description: JSON file containing metadata for the best model in experiment 1\nFile: 'output/experiment1/model-best/tokenizer', Source: Local, Description: Tokenizer files for the best model in experiment 1\nFile: 'output/experiment1/model-last/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the last model in experiment 1\nFile: 'output/experiment1/model-last/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/URL', Source: Local, Description: Configuration file for the last model in experiment 1\nFile: 'output/experiment1/model-last/URL', Source: Local, Description: JSON file containing metadata for the last model in experiment 1\nFile: 'output/experiment1/model-last/tokenizer', Source: Local, Description: Tokenizer files for the last model in experiment 1\nFile: 'output/experiment3/model-best/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the best model in experiment 3\nFile: 'output/experiment3/model-best/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/URL', Source: Local, Description: Configuration file for the best model in experiment 3\nFile: 'output/experiment3/model-best/URL', Source: Local, Description: JSON file containing metadata for the best model in experiment 3\nFile: 'output/experiment3/model-best/tokenizer', Source: Local, Description: Tokenizer files 
for the best model in experiment 3\nFile: 'output/experiment3/model-last/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the last model in experiment 3\nFile: 'output/experiment3/model-last/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/URL', Source: Local, Description: Configuration file for the last model in experiment 3\nFile: 'output/experiment3/model-last/URL', Source: Local, Description: JSON file containing metadata for the last model in experiment 3\nFile: 'output/experiment3/model-last/tokenizer', Source: Local, Description: Tokenizer files for the last model in experiment 3\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for formatting labeled data in the final step\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for formatting data in the first step\nFile: 'python\\_Code/five\\_examples\\_annotated.ipynb', Source: Local, Description: Jupyter notebook containing five annotated examples\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for scoring data in the second step\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for labeling data in the third step\nFile: 'python\\_Code/train\\_eval\\_split.ipynb', Source: Local, Description: Jupyter notebook for training and evaluation data splitting\nFile: 'URL', Source: Local, Description: Text file containing terminal code\nFile: 'URL', Source: Local, Description: Markdown file containing project documentation\nFile: 'URL', Source: Local, Description: JSON file containing Prodigy configuration"
] | [
"TAGS\n#machine learning #natural language processing #huggingface #en #region-us \n",
"### โฏ Commands\n\n\nThe following commands are defined by the project. They\ncan be executed using ['weasel run [name]'](URL\nCommands are only re-run if their inputs have changed.\n\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' reads data from the file specified in the 'dataset\\_file' variable ('data/URL' by default).\n* It extracts text and labels from each JSON object in the dataset file.\n* If both text and at least one label are available, it writes a new JSON object to the output file specified in the 'output\\_file' variable ('data/firstStep\\_file.jsonl' by default) with the extracted text and label.\n* If either text or label is missing in a JSON object, a warning message is printed.\n* Upon completion, the script prints a message confirming the processing and the path to the output file.\n|\n| 'train-text-classification-model' | Train the text classification model for the second step of the project using the 'URL' script. This script loads a blank English spaCy model and adds a text classification pipeline to it. It then trains the model using the processed data from the first step.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' loads a blank English spaCy model and adds a text classification pipeline to it.\n* It reads processed data from the file specified in the 'processed\\_data\\_file' variable ('data/firstStep\\_file.jsonl' by default).\n* The processed data is converted to spaCy format for training the model.\n* The model is trained using the converted data for a specified number of iterations ('n\\_iter').\n* Losses are printed for each iteration during training.\n* Upon completion, the trained model is saved to the specified output directory ('./my\\_trained\\_model' by default).\n|\n| 'classify-unlabeled-data' | Classify the unlabeled data for the third step of the project using the 'URL' script. This script loads the trained spaCy model from the previous step and classifies each record in the unlabeled dataset.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' loads the trained spaCy model from the specified model directory ('./my\\_trained\\_model' by default).\n* It reads the unlabeled data from the file specified in the 'unlabeled\\_data\\_file' variable ('data/URL' by default).\n* Each record in the unlabeled data is classified using the loaded model.\n* The predicted labels for each record are extracted and stored along with the text.\n* The classified data is optionally saved to a file specified in the 'output\\_file' variable ('data/thirdStep\\_file.jsonl' by default).\n|\n| 'format-labeled-data' | Format the labeled data for the final step of the project using the 'URL' script. 
This script processes the classified data from the third step and transforms it into a specific format, considering a threshold for label acceptance.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' reads classified data from the file specified in the 'input\\_file' variable ('data/thirdStep\\_file.jsonl' by default).\n* For each record, it determines accepted categories based on a specified threshold.\n* It constructs an output record containing the text, predicted labels, accepted categories, answer (accept/reject), and options with meta information.\n* The transformed data is written to the file specified in the 'output\\_file' variable ('data/URL' by default).\n|\n| 'setup-environment' | Set up the Python virtual environment.\n|\n| 'review-evaluation-data' | Review the evaluation data in Prodigy and automatically accept annotations.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command reviews the evaluation data in Prodigy.\n* It automatically accepts annotations made during the review process.\n* Only sessions allowed by the environment variable PRODIGY\\_ALLOWED\\_SESSIONS are permitted to review data. In this case, the session 'reviwer' is allowed.\n|\n| 'export-reviewed-evaluation-data' | Export the reviewed evaluation data from Prodigy to a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command exports the reviewed evaluation data from Prodigy to a JSONL file.\n* The data is exported from the Prodigy database associated with the project named 'project3eval-review'.\n* The exported data is saved to the file 'URL'.\n* This command helps in preserving the reviewed annotations for further analysis or processing.\n|\n| 'import-training-data' | Import the training data into Prodigy from a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command imports the training data into Prodigy from the specified JSONL file.\n* The data is imported into the Prodigy database associated with the project named 'prodigy3train'.\n* This command prepares the training data for annotation and model training in Prodigy.\n|\n| 'import-golden-evaluation-data' | Import the golden evaluation data into Prodigy from a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command imports the golden evaluation data into Prodigy from the specified JSONL file.\n* The data is imported into the Prodigy database associated with the project named 'golden3'.\n* This command prepares the golden evaluation data for further analysis and model evaluation in Prodigy.\n|\n| 'train-model-experiment1' | Train a text classification model using Prodigy with the 'prodigy3train' dataset and evaluating on 'golden3'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command trains a text classification model using Prodigy.\n* It uses the 'prodigy3train' dataset for training and evaluates the model on the 'golden3' dataset.\n* The trained model is saved to the './output/experiment1' directory.\n|\n| 'download-model' | Download the English language model 'en\\_core\\_web\\_lg' from spaCy.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command downloads the English language model 'en\\_core\\_web\\_lg' from spaCy.\n* This model is used as the base model for further data processing and training in the project.\n|\n| 'convert-data-to-spacy-format' | Convert the annotated data from Prodigy to spaCy format using the 'prodigy3train' and 'golden3' datasets.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command converts the annotated data from Prodigy to spaCy format.\n* It uses the 'prodigy3train' and 'golden3' 
datasets for conversion.\n* The converted data is saved to the './corpus' directory with the base model 'en\\_core\\_web\\_lg'.\n|\n| 'train-custom-model' | Train a custom text classification model using spaCy with the converted data in spaCy format.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command trains a custom text classification model using spaCy.\n* It uses the converted data in spaCy format located in the './corpus' directory.\n* The model is trained using the configuration defined in 'corpus/URL'.\n|",
"### โญ Workflows\n\n\nThe following workflows are defined by the project. They\ncan be executed using ['weasel run [name]'](URL\nand will run the specified commands in order. Commands are only re-run if their\ninputs have changed.",
"### Assets\n\n\nThe following assets are defined by the project. They can\nbe fetched by running 'weasel assets'\nin the project directory.\n\n\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing NER labels\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing parser labels\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing tagger labels\nFile: 'corpus/labels/textcat\\_multilabel.json', Source: Local, Description: JSON file containing multilabel text classification labels\nFile: 'data/URL', Source: Local, Description: JSONL file containing evaluation data\nFile: 'data/firstStep\\_file.jsonl', Source: Local, Description: JSONL file containing formatted data from the first step\nFile: 'data/five\\_examples\\_annotated5.jsonl', Source: Local, Description: JSONL file containing five annotated examples\nFile: 'data/URL', Source: Local, Description: JSONL file containing golden evaluation data\nFile: 'data/thirdStep\\_file.jsonl', Source: Local, Description: JSONL file containing classified data from the third step\nFile: 'data/URL', Source: Local, Description: JSONL file containing training data\nFile: 'data/URL', Source: Local, Description: JSONL file containing initial training data\nFile: 'data/URL', Source: Local, Description: JSONL file containing formatted and labeled training data\nFile: 'my\\_trained\\_model/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the text classification model\nFile: 'my\\_trained\\_model/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the text classification model\nFile: 'my\\_trained\\_model/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary\nFile: 'my\\_trained\\_model/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary\nFile: 'my\\_trained\\_model/URL', Source: Local, Description: Configuration file for the trained model\nFile: 'my\\_trained\\_model/URL', Source: Local, Description: JSON file containing metadata for the trained model\nFile: 'my\\_trained\\_model/tokenizer', Source: Local, Description: Tokenizer files for the trained model\nFile: 'output/experiment1/model-best/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the best model in experiment 1\nFile: 'output/experiment1/model-best/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the best model in experiment 
1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/URL', Source: Local, Description: Configuration file for the best model in experiment 1\nFile: 'output/experiment1/model-best/URL', Source: Local, Description: JSON file containing metadata for the best model in experiment 1\nFile: 'output/experiment1/model-best/tokenizer', Source: Local, Description: Tokenizer files for the best model in experiment 1\nFile: 'output/experiment1/model-last/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the last model in experiment 1\nFile: 'output/experiment1/model-last/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/URL', Source: Local, Description: Configuration file for the last model in experiment 1\nFile: 'output/experiment1/model-last/URL', Source: Local, Description: JSON file containing metadata for the last model in experiment 1\nFile: 'output/experiment1/model-last/tokenizer', Source: Local, Description: Tokenizer files for the last model in experiment 1\nFile: 'output/experiment3/model-best/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the best model in experiment 3\nFile: 'output/experiment3/model-best/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/URL', Source: Local, Description: Configuration file for the best model in experiment 3\nFile: 'output/experiment3/model-best/URL', Source: Local, Description: JSON file containing metadata for the best model in experiment 3\nFile: 'output/experiment3/model-best/tokenizer', Source: Local, Description: Tokenizer files 
for the best model in experiment 3\nFile: 'output/experiment3/model-last/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the last model in experiment 3\nFile: 'output/experiment3/model-last/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/URL', Source: Local, Description: Configuration file for the last model in experiment 3\nFile: 'output/experiment3/model-last/URL', Source: Local, Description: JSON file containing metadata for the last model in experiment 3\nFile: 'output/experiment3/model-last/tokenizer', Source: Local, Description: Tokenizer files for the last model in experiment 3\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for formatting labeled data in the final step\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for formatting data in the first step\nFile: 'python\\_Code/five\\_examples\\_annotated.ipynb', Source: Local, Description: Jupyter notebook containing five annotated examples\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for scoring data in the second step\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for labeling data in the third step\nFile: 'python\\_Code/train\\_eval\\_split.ipynb', Source: Local, Description: Jupyter notebook for training and evaluation data splitting\nFile: 'URL', Source: Local, Description: Text file containing terminal code\nFile: 'URL', Source: Local, Description: Markdown file containing project documentation\nFile: 'URL', Source: Local, Description: JSON file containing Prodigy configuration"
] | [
17,
1546,
56,
2439
] | [
"TAGS\n#machine learning #natural language processing #huggingface #en #region-us \n### โฏ Commands\n\n\nThe following commands are defined by the project. They\ncan be executed using ['weasel run [name]'](URL\nCommands are only re-run if their inputs have changed.\n\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' reads data from the file specified in the 'dataset\\_file' variable ('data/URL' by default).\n* It extracts text and labels from each JSON object in the dataset file.\n* If both text and at least one label are available, it writes a new JSON object to the output file specified in the 'output\\_file' variable ('data/firstStep\\_file.jsonl' by default) with the extracted text and label.\n* If either text or label is missing in a JSON object, a warning message is printed.\n* Upon completion, the script prints a message confirming the processing and the path to the output file.\n|\n| 'train-text-classification-model' | Train the text classification model for the second step of the project using the 'URL' script. This script loads a blank English spaCy model and adds a text classification pipeline to it. It then trains the model using the processed data from the first step.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' loads a blank English spaCy model and adds a text classification pipeline to it.\n* It reads processed data from the file specified in the 'processed\\_data\\_file' variable ('data/firstStep\\_file.jsonl' by default).\n* The processed data is converted to spaCy format for training the model.\n* The model is trained using the converted data for a specified number of iterations ('n\\_iter').\n* Losses are printed for each iteration during training.\n* Upon completion, the trained model is saved to the specified output directory ('./my\\_trained\\_model' by default).\n|\n| 'classify-unlabeled-data' | Classify the unlabeled data for the third step of the project using the 'URL' script. This script loads the trained spaCy model from the previous step and classifies each record in the unlabeled dataset.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' loads the trained spaCy model from the specified model directory ('./my\\_trained\\_model' by default).\n* It reads the unlabeled data from the file specified in the 'unlabeled\\_data\\_file' variable ('data/URL' by default).\n* Each record in the unlabeled data is classified using the loaded model.\n* The predicted labels for each record are extracted and stored along with the text.\n* The classified data is optionally saved to a file specified in the 'output\\_file' variable ('data/thirdStep\\_file.jsonl' by default).\n|\n| 'format-labeled-data' | Format the labeled data for the final step of the project using the 'URL' script. 
This script processes the classified data from the third step and transforms it into a specific format, considering a threshold for label acceptance.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The script 'URL' reads classified data from the file specified in the 'input\\_file' variable ('data/thirdStep\\_file.jsonl' by default).\n* For each record, it determines accepted categories based on a specified threshold.\n* It constructs an output record containing the text, predicted labels, accepted categories, answer (accept/reject), and options with meta information.\n* The transformed data is written to the file specified in the 'output\\_file' variable ('data/URL' by default).\n|\n| 'setup-environment' | Set up the Python virtual environment.\n|\n| 'review-evaluation-data' | Review the evaluation data in Prodigy and automatically accept annotations.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command reviews the evaluation data in Prodigy.\n* It automatically accepts annotations made during the review process.\n* Only sessions allowed by the environment variable PRODIGY\\_ALLOWED\\_SESSIONS are permitted to review data. In this case, the session 'reviwer' is allowed.\n|\n| 'export-reviewed-evaluation-data' | Export the reviewed evaluation data from Prodigy to a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command exports the reviewed evaluation data from Prodigy to a JSONL file.\n* The data is exported from the Prodigy database associated with the project named 'project3eval-review'.\n* The exported data is saved to the file 'URL'.\n* This command helps in preserving the reviewed annotations for further analysis or processing.\n|\n| 'import-training-data' | Import the training data into Prodigy from a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command imports the training data into Prodigy from the specified JSONL file.\n* The data is imported into the Prodigy database associated with the project named 'prodigy3train'.\n* This command prepares the training data for annotation and model training in Prodigy.\n|\n| 'import-golden-evaluation-data' | Import the golden evaluation data into Prodigy from a JSONL file named 'URL'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command imports the golden evaluation data into Prodigy from the specified JSONL file.\n* The data is imported into the Prodigy database associated with the project named 'golden3'.\n* This command prepares the golden evaluation data for further analysis and model evaluation in Prodigy.\n|\n| 'train-model-experiment1' | Train a text classification model using Prodigy with the 'prodigy3train' dataset and evaluating on 'golden3'.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command trains a text classification model using Prodigy.\n* It uses the 'prodigy3train' dataset for training and evaluates the model on the 'golden3' dataset.\n* The trained model is saved to the './output/experiment1' directory.\n|\n| 'download-model' | Download the English language model 'en\\_core\\_web\\_lg' from spaCy.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command downloads the English language model 'en\\_core\\_web\\_lg' from spaCy.\n* This model is used as the base model for further data processing and training in the project.\n|\n| 'convert-data-to-spacy-format' | Convert the annotated data from Prodigy to spaCy format using the 'prodigy3train' and 'golden3' datasets.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command converts the annotated data from Prodigy to spaCy format.\n* It uses the 'prodigy3train' and 'golden3' 
datasets for conversion.\n* The converted data is saved to the './corpus' directory with the base model 'en\\_core\\_web\\_lg'.\n|\n| 'train-custom-model' | Train a custom text classification model using spaCy with the converted data in spaCy format.\n\n\nUsage:\n\n\nExplanation:\n\n\n* The command trains a custom text classification model using spaCy.\n* It uses the converted data in spaCy format located in the './corpus' directory.\n* The model is trained using the configuration defined in 'corpus/URL'.\n|### โญ Workflows\n\n\nThe following workflows are defined by the project. They\ncan be executed using ['weasel run [name]'](URL\nand will run the specified commands in order. Commands are only re-run if their\ninputs have changed.### Assets\n\n\nThe following assets are defined by the project. They can\nbe fetched by running 'weasel assets'\nin the project directory.\n\n\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing NER labels\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing parser labels\nFile: 'corpus/labels/URL', Source: Local, Description: JSON file containing tagger labels\nFile: 'corpus/labels/textcat\\_multilabel.json', Source: Local, Description: JSON file containing multilabel text classification labels\nFile: 'data/URL', Source: Local, Description: JSONL file containing evaluation data\nFile: 'data/firstStep\\_file.jsonl', Source: Local, Description: JSONL file containing formatted data from the first step\nFile: 'data/five\\_examples\\_annotated5.jsonl', Source: Local, Description: JSONL file containing five annotated examples\nFile: 'data/URL', Source: Local, Description: JSONL file containing golden evaluation data\nFile: 'data/thirdStep\\_file.jsonl', Source: Local, Description: JSONL file containing classified data from the third step\nFile: 'data/URL', Source: Local, Description: JSONL file containing training data\nFile: 'data/URL', Source: Local, Description: JSONL file containing initial training data\nFile: 'data/URL', Source: Local, Description: JSONL file containing formatted and labeled training data\nFile: 'my\\_trained\\_model/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the text classification model\nFile: 'my\\_trained\\_model/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the text classification model\nFile: 'my\\_trained\\_model/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary\nFile: 'my\\_trained\\_model/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary\nFile: 'my\\_trained\\_model/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary\nFile: 'my\\_trained\\_model/URL', Source: Local, Description: Configuration file for the trained model\nFile: 'my\\_trained\\_model/URL', Source: Local, Description: JSON file containing metadata for the trained model\nFile: 'my\\_trained\\_model/tokenizer', Source: Local, Description: Tokenizer files for the trained model\nFile: 'output/experiment1/model-best/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the best model in experiment 1\nFile: 'output/experiment1/model-best/textcat\\_multilabel/model', Source: 
Local, Description: Trained model files for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the best model in experiment 1\nFile: 'output/experiment1/model-best/URL', Source: Local, Description: Configuration file for the best model in experiment 1\nFile: 'output/experiment1/model-best/URL', Source: Local, Description: JSON file containing metadata for the best model in experiment 1\nFile: 'output/experiment1/model-best/tokenizer', Source: Local, Description: Tokenizer files for the best model in experiment 1\nFile: 'output/experiment1/model-last/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the last model in experiment 1\nFile: 'output/experiment1/model-last/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the last model in experiment 1\nFile: 'output/experiment1/model-last/URL', Source: Local, Description: Configuration file for the last model in experiment 1\nFile: 'output/experiment1/model-last/URL', Source: Local, Description: JSON file containing metadata for the last model in experiment 1\nFile: 'output/experiment1/model-last/tokenizer', Source: Local, Description: Tokenizer files for the last model in experiment 1\nFile: 'output/experiment3/model-best/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the best model in experiment 3\nFile: 'output/experiment3/model-best/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: JSON file containing string representations of 
the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the best model in experiment 3\nFile: 'output/experiment3/model-best/URL', Source: Local, Description: Configuration file for the best model in experiment 3\nFile: 'output/experiment3/model-best/URL', Source: Local, Description: JSON file containing metadata for the best model in experiment 3\nFile: 'output/experiment3/model-best/tokenizer', Source: Local, Description: Tokenizer files for the best model in experiment 3\nFile: 'output/experiment3/model-last/textcat\\_multilabel/cfg', Source: Local, Description: Configuration files for the last model in experiment 3\nFile: 'output/experiment3/model-last/textcat\\_multilabel/model', Source: Local, Description: Trained model files for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/key2row', Source: Local, Description: Mapping from keys to row indices in the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: Binary lookups file for the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: JSON file containing string representations of the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/vectors', Source: Local, Description: Directory containing vector files for the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/vocab/URL', Source: Local, Description: Configuration file for vectors in the vocabulary for the last model in experiment 3\nFile: 'output/experiment3/model-last/URL', Source: Local, Description: Configuration file for the last model in experiment 3\nFile: 'output/experiment3/model-last/URL', Source: Local, Description: JSON file containing metadata for the last model in experiment 3\nFile: 'output/experiment3/model-last/tokenizer', Source: Local, Description: Tokenizer files for the last model in experiment 3\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for formatting labeled data in the final step\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for formatting data in the first step\nFile: 'python\\_Code/five\\_examples\\_annotated.ipynb', Source: Local, Description: Jupyter notebook containing five annotated examples\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for scoring data in the second step\nFile: 'python\\_Code/URL', Source: Local, Description: Python script for labeling data in the third step\nFile: 'python\\_Code/train\\_eval\\_split.ipynb', Source: Local, Description: Jupyter notebook for training and evaluation data splitting\nFile: 'URL', Source: Local, Description: Text file containing terminal code\nFile: 'URL', Source: Local, Description: Markdown file containing project documentation\nFile: 'URL', Source: Local, Description: JSON file containing Prodigy configuration"
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, send us a message at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai)'s high-performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
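The toy loop below (our illustration, not the EasyContext implementation) shows the communication pattern Ring Attention is built on: each worker keeps its query block local and receives a different KV block on every rotation, so every query/KV pair is covered after one full trip around the ring.

```python
# Hypothetical 4-device ring; each rank owns one query block and one KV block to start.
W = 4
kv_block = list(range(W))  # kv_block[r] = index of the KV block rank r currently holds
for step in range(W):
    for rank in range(W):
        print(f"step {step}: rank {rank} attends its local queries against KV block {kv_block[rank]}")
    kv_block = kv_block[1:] + kv_block[:1]  # pass KV blocks one hop around the ring
# After W steps every rank has combined its query block with all W KV blocks.
```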
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
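One simple way to build such samples is to pack consecutive SlimPajama documents into a single fixed-length sequence. The sketch below is an assumption for illustration, not Gradient's actual augmentation pipeline:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
stream = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

target_len = 2**16  # tokens per packed sample, matching the first training stage
buffer = []
for doc in stream:
    buffer.extend(tokenizer(doc["text"], add_special_tokens=False)["input_ids"])
    buffer.append(tokenizer.eos_token_id)  # separate documents with EOS
    if len(buffer) >= target_len:
        sample = buffer[:target_len]  # one 65k-token training sample
        buffer = buffer[target_len:]
        break  # a real pipeline would keep yielding samples instead of stopping
```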
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3M | 207.1M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62,914,560 | 100,663,296 | 419,430,400 | 838,860,800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
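The Total Tokens column is consistent with sequence length × batch size × gradient-accumulation steps × training steps, which is easy to verify:

```python
# Reproduce the "Total Tokens" row of the table above.
stages = {
    "65K":   (2**16, 1, 32, 30),
    "262K":  (2**18, 1, 16, 24),
    "524k":  (2**19, 16, 1, 50),
    "1048k": (2**20, 16, 1, 50),
}
for name, (seq_len, batch, grad_accum, steps) in stages.items():
    print(f"{name}: {seq_len * batch * grad_accum * steps:,} tokens")
# 65K: 62,914,560  262K: 100,663,296  524k: 419,430,400  1048k: 838,860,800
```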
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
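For the long-context checkpoint described at the top of this card, the same `generate()` API applies. The sketch below is illustrative only: the repo path is a placeholder to replace with the actual checkpoint, and prompts approaching 1M tokens need far more GPU memory than the short-prompt examples above.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

long_ctx_model_id = "path/to/Llama-3-8B-Instruct-Gradient-1048k"  # placeholder, substitute the real repo
tokenizer = AutoTokenizer.from_pretrained(long_ctx_model_id)
model = AutoModelForCausalLM.from_pretrained(
    long_ctx_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

with open("long_report.txt") as f:  # any long document you want the model to read
    document = f.read()

messages = [{"role": "user", "content": f"Summarize the key findings:\n\n{document}"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```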
### Use with `llama3`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
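As a quick sanity check, the rows of the table above are mutually consistent; the implied grid carbon intensity below is our own back-of-the-envelope inference, not a figure Meta reports:

```python
# GPU-hours and reported emissions from the table above.
rows = {"Llama 3 8B": (1.3e6, 390), "Llama 3 70B": (6.4e6, 1900)}
tdp_watts = 700
for name, (gpu_hours, tco2eq) in rows.items():
    mwh = gpu_hours * tdp_watts / 1e6  # GPU-hours x watts -> Wh, then to MWh
    print(f"{name}: {mwh:,.0f} MWh, implied ~{tco2eq / mwh:.2f} kg CO2eq per kWh")
# Both rows imply roughly 0.43 kg CO2eq/kWh, so the table is internally consistent.
```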
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4.4-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:24:05+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| [<img src="URL width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, send us a message at contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data:
For training data, we generate long contexts by augmenting SlimPajama.
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
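The original card's pipeline snippet is not preserved in this dump; the following is a minimal sketch, assuming a recent transformers version that accepts chat messages directly and an assumed repository id:

```python
import torch
import transformers

model_id = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"  # assumed repo id

pipe = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9)
print(outputs[0]["generated_text"][-1])  # last message is the assistant reply
```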
#### Transformers AutoModelForCausalLM
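Likewise, a hedged sketch of the 'generate()' route with the Auto classes (same assumed repository id; the chat template is applied explicitly):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```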
### Use with 'llama3'
Please follow the instructions in the repository.
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
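The original huggingface-cli command is not preserved in this dump; a hedged Python equivalent using the huggingface_hub API is sketched below (the repository id, file pattern, and target directory are illustrative assumptions):

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed repo id
    allow_patterns=["original/*"],                   # only the llama3-format checkpoints
    local_dir="Meta-Llama-3-8B-Instruct",
)
```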
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL}
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
52,
42,
6,
13,
429,
8,
6,
270,
280,
72,
115,
118,
126,
2136
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.#### Transformers pipeline#### Transformers AutoModelForCausalLM### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.### Base pretrained models### Instruction tuned models### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. 
We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. 
On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-dpo-full-sft-wo-healthsearch_qa
This model is a fine-tuned version of [Minbyul/mistral-7b-wo-healthsearch_qa-sft](https://huggingface.co/Minbyul/mistral-7b-wo-healthsearch_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set (a brief usage sketch follows these metrics):
- Loss: 0.6746
- Rewards/chosen: -0.0204
- Rewards/rejected: -0.0600
- Rewards/accuracies: 0.6612
- Rewards/margins: 0.0395
- Logps/rejected: -1091.8407
- Logps/chosen: -817.4551
- Logits/rejected: -2.8353
- Logits/chosen: -2.9083
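
As a quick illustration of how this checkpoint might be exercised, the snippet below loads it with the `transformers` auto classes and generates a short completion. This is only a sketch, not an official recipe: the repository id is taken from this card's metadata, while the bf16 precision, device placement, example prompt, and generation settings are assumptions that may need adapting.

```python
# Minimal sketch: load the DPO-tuned checkpoint and generate a reply.
# Assumes the weights are hosted under this repo id and fit in bf16 on one GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Minbyul/mistral-7b-dpo-full-sft-wo-healthsearch_qa"  # from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "What are common symptoms of seasonal allergies?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```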
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative configuration sketch follows this list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
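
For readers who want to see how these values fit together, below is an illustrative sketch of a matching `TrainingArguments` object; the effective batch size works out to 8 per device × 4 GPUs × 2 gradient-accumulation steps = 64, as listed above. The output directory and bf16 flag are assumptions, and the exact `DPOTrainer` call from `trl` is omitted because its signature varies across versions.

```python
# Illustrative only: how the listed hyperparameters might be expressed with
# transformers.TrainingArguments before being handed to trl's DPOTrainer.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-7b-dpo-full-sft-wo-healthsearch_qa",  # assumed
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 4 devices x 2 accumulation steps = 64 effective
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                       # assumption; precision is not stated in the card
)
```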
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/mistral-7b-wo-healthsearch_qa-sft", "model-index": [{"name": "mistral-7b-dpo-full-sft-wo-healthsearch_qa", "results": []}]} | Minbyul/mistral-7b-dpo-full-sft-wo-healthsearch_qa | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/mistral-7b-wo-healthsearch_qa-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:26:01+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/mistral-7b-wo-healthsearch_qa-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# mistral-7b-dpo-full-sft-wo-healthsearch_qa
This model is a fine-tuned version of Minbyul/mistral-7b-wo-healthsearch_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6746
- Rewards/chosen: -0.0204
- Rewards/rejected: -0.0600
- Rewards/accuracies: 0.6612
- Rewards/margins: 0.0395
- Logps/rejected: -1091.8407
- Logps/chosen: -817.4551
- Logits/rejected: -2.8353
- Logits/chosen: -2.9083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# mistral-7b-dpo-full-sft-wo-healthsearch_qa\n\nThis model is a fine-tuned version of Minbyul/mistral-7b-wo-healthsearch_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6746\n- Rewards/chosen: -0.0204\n- Rewards/rejected: -0.0600\n- Rewards/accuracies: 0.6612\n- Rewards/margins: 0.0395\n- Logps/rejected: -1091.8407\n- Logps/chosen: -817.4551\n- Logits/rejected: -2.8353\n- Logits/chosen: -2.9083",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/mistral-7b-wo-healthsearch_qa-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# mistral-7b-dpo-full-sft-wo-healthsearch_qa\n\nThis model is a fine-tuned version of Minbyul/mistral-7b-wo-healthsearch_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6746\n- Rewards/chosen: -0.0204\n- Rewards/rejected: -0.0600\n- Rewards/accuracies: 0.6612\n- Rewards/margins: 0.0395\n- Logps/rejected: -1091.8407\n- Logps/chosen: -817.4551\n- Logits/rejected: -2.8353\n- Logits/chosen: -2.9083",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
101,
178,
7,
9,
9,
4,
155,
5,
43
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/mistral-7b-wo-healthsearch_qa-sft #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# mistral-7b-dpo-full-sft-wo-healthsearch_qa\n\nThis model is a fine-tuned version of Minbyul/mistral-7b-wo-healthsearch_qa-sft on the HuggingFaceH4/ultrafeedback_binarized dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6746\n- Rewards/chosen: -0.0204\n- Rewards/rejected: -0.0600\n- Rewards/accuracies: 0.6612\n- Rewards/margins: 0.0395\n- Logps/rejected: -1091.8407\n- Logps/chosen: -817.4551\n- Logits/rejected: -2.8353\n- Logits/chosen: -2.9083## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA12
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a scheduler sketch follows this list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
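
As a small illustration of the scheduler choice above, the sketch below builds the warmup-plus-cosine-with-restarts schedule using the helper that ships with `transformers`. The optimizer, the stand-in parameter group, and the total step count (roughly 330 optimizer steps across 3 epochs, matching the results table) are placeholders chosen for the example.

```python
# Sketch of the cosine_with_restarts schedule with 100 warmup steps.
# The AdamW instance and the 330 total steps are illustrative placeholders.
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]          # stand-in parameter group
optimizer = torch.optim.AdamW(params, lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=330,   # ~110 optimizer steps per epoch for 3 epochs
    num_cycles=1,
)

for step in range(330):
    optimizer.step()
    scheduler.step()
```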
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5515 | 0.09 | 10 | 0.1735 |
| 0.1665 | 0.18 | 20 | 0.1565 |
| 0.1531 | 0.27 | 30 | 0.1592 |
| 0.1558 | 0.36 | 40 | 0.1489 |
| 0.1489 | 0.45 | 50 | 0.1490 |
| 0.1518 | 0.54 | 60 | 0.1497 |
| 0.1517 | 0.63 | 70 | 0.1472 |
| 0.1485 | 0.73 | 80 | 0.1536 |
| 0.1467 | 0.82 | 90 | 0.1476 |
| 0.15 | 0.91 | 100 | 0.1674 |
| 0.1763 | 1.0 | 110 | 0.1856 |
| 1.0647 | 1.09 | 120 | 8.3962 |
| 5.0664 | 1.18 | 130 | 1.3023 |
| 1.0961 | 1.27 | 140 | 0.9335 |
| 0.6186 | 1.36 | 150 | 0.4091 |
| 0.41 | 1.45 | 160 | 0.4651 |
| 0.3489 | 1.54 | 170 | 0.2977 |
| 0.2826 | 1.63 | 180 | 0.2353 |
| 0.2238 | 1.72 | 190 | 0.2088 |
| 0.1962 | 1.81 | 200 | 0.1988 |
| 0.1893 | 1.9 | 210 | 0.1917 |
| 0.1879 | 1.99 | 220 | 0.1814 |
| 0.173 | 2.08 | 230 | 0.1894 |
| 0.1753 | 2.18 | 240 | 0.1669 |
| 0.1573 | 2.27 | 250 | 0.1580 |
| 0.1531 | 2.36 | 260 | 0.1547 |
| 0.1429 | 2.45 | 270 | 0.1496 |
| 0.1464 | 2.54 | 280 | 0.1471 |
| 0.1387 | 2.63 | 290 | 0.1482 |
| 0.1414 | 2.72 | 300 | 0.1460 |
| 0.1477 | 2.81 | 310 | 0.1461 |
| 0.1425 | 2.9 | 320 | 0.1466 |
| 0.1399 | 2.99 | 330 | 0.1467 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA12", "results": []}]} | Litzy619/O0428HMA12 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T02:26:20+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA12
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1467
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
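
In lieu of an official snippet, a generic starting point is sketched below. It assumes this repository hosts a standard Llama-style causal language model whose tokenizer defines a chat template, which matches the repo tags but is not confirmed by the card.

```python
# Generic sketch: load the checkpoint with the transformers auto classes.
# The repo id comes from this card's metadata; everything else is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/r9zwfd1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```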
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/r9zwfd1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:30:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/20pj7c8 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:32:19+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | lleticiasilvaa/1B-datasetMenor-10epochs | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:32:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
47,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
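In the absence of author-provided code, here is a minimal sketch that loads the checkpoint as a standard Hugging Face causal language model. The repository id is taken from the model metadata; the prompt and generation settings are illustrative assumptions, not values documented in this card.

```python
# Minimal sketch (assumptions noted): the repo id comes from the model metadata,
# and the prompt/generation settings are purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/model13"  # assumed from the repository metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```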
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model13 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:32:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
41,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA22
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
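As a rough illustration, the hyperparameters above map onto the following `TrainingArguments`; this is a sketch under the assumption that the standard `transformers` `Trainer` was used, not the authors' actual training script (model and dataset setup are omitted).

```python
# Sketch only (assumes a single training device, so 8 * 16 = 128 matches
# total_train_batch_size). Adam betas/epsilon are the defaults reported above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="O0428HMA22",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```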
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4529 | 0.09 | 10 | 0.1637 |
| 0.1607 | 0.18 | 20 | 0.1594 |
| 0.1523 | 0.27 | 30 | 0.1619 |
| 0.1562 | 0.36 | 40 | 0.1498 |
| 0.1516 | 0.45 | 50 | 0.1536 |
| 0.1533 | 0.54 | 60 | 0.1494 |
| 0.1507 | 0.63 | 70 | 0.1481 |
| 0.1494 | 0.73 | 80 | 0.1566 |
| 0.1481 | 0.82 | 90 | 0.1476 |
| 0.1486 | 0.91 | 100 | 0.1493 |
| 0.1506 | 1.0 | 110 | 0.1496 |
| 0.1464 | 1.09 | 120 | 0.1483 |
| 0.1465 | 1.18 | 130 | 0.1523 |
| 0.148 | 1.27 | 140 | 0.1493 |
| 0.1512 | 1.36 | 150 | 0.1502 |
| 0.147 | 1.45 | 160 | 0.1495 |
| 0.1453 | 1.54 | 170 | 0.1470 |
| 0.1477 | 1.63 | 180 | 0.1460 |
| 0.1476 | 1.72 | 190 | 0.1500 |
| 0.145 | 1.81 | 200 | 0.1482 |
| 0.1483 | 1.9 | 210 | 0.1451 |
| 0.139 | 1.99 | 220 | 0.1258 |
| 0.0991 | 2.08 | 230 | 0.0957 |
| 0.1018 | 2.18 | 240 | 0.0760 |
| 0.0642 | 2.27 | 250 | 0.0672 |
| 0.0644 | 2.36 | 260 | 0.0607 |
| 0.0533 | 2.45 | 270 | 0.0558 |
| 0.0475 | 2.54 | 280 | 0.0542 |
| 0.0509 | 2.63 | 290 | 0.0499 |
| 0.0512 | 2.72 | 300 | 0.0486 |
| 0.0478 | 2.81 | 310 | 0.0488 |
| 0.0466 | 2.9 | 320 | 0.0471 |
| 0.0504 | 2.99 | 330 | 0.0467 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA22", "results": []}]} | Litzy619/O0428HMA22 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T02:33:33+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA22
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0467
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 80
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA21
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4312 | 0.09 | 10 | 0.1993 |
| 0.1645 | 0.18 | 20 | 0.1553 |
| 0.1493 | 0.27 | 30 | 0.1641 |
| 0.1576 | 0.36 | 40 | 0.1525 |
| 0.1525 | 0.45 | 50 | 0.1490 |
| 0.1538 | 0.54 | 60 | 0.1493 |
| 0.1506 | 0.63 | 70 | 0.1472 |
| 0.1497 | 0.73 | 80 | 0.1536 |
| 0.1472 | 0.82 | 90 | 0.1494 |
| 0.1484 | 0.91 | 100 | 0.1478 |
| 0.1422 | 1.0 | 110 | 0.1043 |
| 0.6143 | 1.09 | 120 | 0.1460 |
| 0.1612 | 1.18 | 130 | 0.1327 |
| 0.1067 | 1.27 | 140 | 0.0796 |
| 0.3298 | 1.36 | 150 | 0.0890 |
| 0.0715 | 1.45 | 160 | 0.0631 |
| 0.0578 | 1.54 | 170 | 0.0577 |
| 0.0614 | 1.63 | 180 | 0.0570 |
| 0.063 | 1.72 | 190 | 0.0554 |
| 0.0561 | 1.81 | 200 | 0.0554 |
| 0.0561 | 1.9 | 210 | 0.0580 |
| 0.0568 | 1.99 | 220 | 0.0554 |
| 0.0559 | 2.08 | 230 | 0.0528 |
| 0.0546 | 2.18 | 240 | 0.0597 |
| 0.0577 | 2.27 | 250 | 0.0600 |
| 0.0592 | 2.36 | 260 | 0.0560 |
| 0.0547 | 2.45 | 270 | 0.0537 |
| 0.0517 | 2.54 | 280 | 0.0530 |
| 0.0524 | 2.63 | 290 | 0.0541 |
| 0.0532 | 2.72 | 300 | 0.0514 |
| 0.0531 | 2.81 | 310 | 0.0512 |
| 0.0546 | 2.9 | 320 | 0.0514 |
| 0.0547 | 2.99 | 330 | 0.0514 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA21", "results": []}]} | Litzy619/O0428HMA21 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T02:33:35+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA21
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0514
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 80
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA24
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3605 | 0.09 | 10 | 0.1809 |
| 0.1688 | 0.18 | 20 | 0.1604 |
| 0.1494 | 0.27 | 30 | 0.1601 |
| 0.1569 | 0.36 | 40 | 0.1538 |
| 0.1533 | 0.45 | 50 | 0.1535 |
| 0.1529 | 0.54 | 60 | 0.1502 |
| 0.1499 | 0.63 | 70 | 0.1480 |
| 0.15 | 0.73 | 80 | 0.1548 |
| 0.1475 | 0.82 | 90 | 0.1495 |
| 0.1479 | 0.91 | 100 | 0.1459 |
| 0.1355 | 1.0 | 110 | 0.1022 |
| 0.2371 | 1.09 | 120 | 0.1226 |
| 0.1134 | 1.18 | 130 | 0.0893 |
| 0.0964 | 1.27 | 140 | 0.0853 |
| 0.0865 | 1.36 | 150 | 0.0728 |
| 0.0896 | 1.45 | 160 | 0.0597 |
| 0.0643 | 1.54 | 170 | 0.0606 |
| 0.0606 | 1.63 | 180 | 0.0574 |
| 0.0631 | 1.72 | 190 | 0.0569 |
| 0.0577 | 1.81 | 200 | 0.0625 |
| 0.0584 | 1.9 | 210 | 0.0613 |
| 0.0601 | 1.99 | 220 | 0.0564 |
| 0.0582 | 2.08 | 230 | 0.0578 |
| 0.0548 | 2.18 | 240 | 0.0587 |
| 0.0561 | 2.27 | 250 | 0.0592 |
| 0.061 | 2.36 | 260 | 0.0571 |
| 0.0534 | 2.45 | 270 | 0.0559 |
| 0.052 | 2.54 | 280 | 0.0556 |
| 0.0549 | 2.63 | 290 | 0.0571 |
| 0.0568 | 2.72 | 300 | 0.0551 |
| 0.0567 | 2.81 | 310 | 0.0549 |
| 0.0577 | 2.9 | 320 | 0.0551 |
| 0.0607 | 2.99 | 330 | 0.0551 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA24", "results": []}]} | Litzy619/O0428HMA24 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T02:34:21+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA24
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0551
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 80
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-facebook-bart-large-xsum-on-samsum
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5249
- Rouge1: 50.3616
- Rouge2: 25.1246
- Rougel: 41.214
- Rougelsum: 46.1946
- Gen Len: 26.423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 0.11 | 100 | 1.5514 | 49.1738 | 23.682 | 40.0793 | 44.8382 | 26.0818 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
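A minimal usage sketch, included here as an assumption rather than as part of the original card: the repository id comes from the model metadata, and the sample dialogue is an illustrative SAMSum-style exchange.

```python
# Usage sketch (assumptions noted above): summarize a short dialogue with the
# fine-tuned checkpoint via the summarization pipeline.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="mrami010/ft-facebook-bart-large-xsum-on-samsum",
)

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring them tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```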
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-large-xsum", "model-index": [{"name": "ft-facebook-bart-large-xsum-on-samsum", "results": []}]} | mrami010/ft-facebook-bart-large-xsum-on-samsum | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:35:08+00:00 | [] | [] | TAGS
#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-xsum #license-mit #autotrain_compatible #endpoints_compatible #region-us
| ft-facebook-bart-large-xsum-on-samsum
=====================================
This model is a fine-tuned version of facebook/bart-large-xsum on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5249
* Rouge1: 50.3616
* Rouge2: 25.1246
* Rougel: 41.214
* Rougelsum: 46.1946
* Gen Len: 26.423
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.2.1+cu121
* Datasets 2.17.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.17.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-xsum #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.17.0\n* Tokenizers 0.15.2"
] | [
52,
133,
5,
44
] | [
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-xsum #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.17.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5103
- F1 Score: 0.7719
- Accuracy: 0.7727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5815 | 1.01 | 200 | 0.5460 | 0.7425 | 0.7437 |
| 0.5388 | 2.02 | 400 | 0.5339 | 0.7539 | 0.7557 |
| 0.5267 | 3.03 | 600 | 0.5203 | 0.7634 | 0.7642 |
| 0.5157 | 4.04 | 800 | 0.5263 | 0.7570 | 0.7592 |
| 0.5107 | 5.05 | 1000 | 0.5209 | 0.7684 | 0.7689 |
| 0.503 | 6.06 | 1200 | 0.5166 | 0.7622 | 0.7636 |
| 0.4945 | 7.07 | 1400 | 0.5246 | 0.7603 | 0.7623 |
| 0.491 | 8.08 | 1600 | 0.5230 | 0.7634 | 0.7645 |
| 0.4814 | 9.09 | 1800 | 0.5138 | 0.7632 | 0.7648 |
| 0.4748 | 10.1 | 2000 | 0.5255 | 0.7538 | 0.7563 |
| 0.4648 | 11.11 | 2200 | 0.5249 | 0.7613 | 0.7629 |
| 0.4588 | 12.12 | 2400 | 0.5281 | 0.7497 | 0.7509 |
| 0.4516 | 13.13 | 2600 | 0.5384 | 0.7542 | 0.7573 |
| 0.447 | 14.14 | 2800 | 0.5295 | 0.7590 | 0.7598 |
| 0.4346 | 15.15 | 3000 | 0.5380 | 0.7577 | 0.7579 |
| 0.4293 | 16.16 | 3200 | 0.5431 | 0.7446 | 0.7456 |
| 0.422 | 17.17 | 3400 | 0.5519 | 0.7602 | 0.7610 |
| 0.4181 | 18.18 | 3600 | 0.5535 | 0.7426 | 0.7456 |
| 0.4024 | 19.19 | 3800 | 0.5521 | 0.7456 | 0.7472 |
| 0.3964 | 20.2 | 4000 | 0.5623 | 0.7467 | 0.7481 |
| 0.3941 | 21.21 | 4200 | 0.5572 | 0.7504 | 0.7519 |
| 0.3824 | 22.22 | 4400 | 0.5833 | 0.7475 | 0.7478 |
| 0.3755 | 23.23 | 4600 | 0.5835 | 0.7469 | 0.7472 |
| 0.3746 | 24.24 | 4800 | 0.5921 | 0.7447 | 0.7472 |
| 0.3647 | 25.25 | 5000 | 0.5953 | 0.7334 | 0.7333 |
| 0.3623 | 26.26 | 5200 | 0.5986 | 0.7351 | 0.7355 |
| 0.3515 | 27.27 | 5400 | 0.6126 | 0.7301 | 0.7323 |
| 0.3485 | 28.28 | 5600 | 0.6078 | 0.7370 | 0.7380 |
| 0.3441 | 29.29 | 5800 | 0.6272 | 0.7363 | 0.7371 |
| 0.3326 | 30.3 | 6000 | 0.6436 | 0.7388 | 0.7386 |
| 0.3347 | 31.31 | 6200 | 0.6255 | 0.7368 | 0.7377 |
| 0.3316 | 32.32 | 6400 | 0.6361 | 0.7294 | 0.7311 |
| 0.3216 | 33.33 | 6600 | 0.6443 | 0.7279 | 0.7301 |
| 0.3179 | 34.34 | 6800 | 0.6395 | 0.7278 | 0.7282 |
| 0.3067 | 35.35 | 7000 | 0.6541 | 0.7329 | 0.7333 |
| 0.3097 | 36.36 | 7200 | 0.6668 | 0.7239 | 0.7251 |
| 0.3056 | 37.37 | 7400 | 0.6633 | 0.7266 | 0.7282 |
| 0.3005 | 38.38 | 7600 | 0.6693 | 0.7229 | 0.7232 |
| 0.2895 | 39.39 | 7800 | 0.6951 | 0.7264 | 0.7266 |
| 0.2925 | 40.4 | 8000 | 0.6964 | 0.7239 | 0.7244 |
| 0.2902 | 41.41 | 8200 | 0.6895 | 0.7276 | 0.7295 |
| 0.2883 | 42.42 | 8400 | 0.7034 | 0.7224 | 0.7244 |
| 0.2851 | 43.43 | 8600 | 0.7049 | 0.7226 | 0.7235 |
| 0.2807 | 44.44 | 8800 | 0.7085 | 0.7212 | 0.7219 |
| 0.2805 | 45.45 | 9000 | 0.7033 | 0.7229 | 0.7241 |
| 0.2813 | 46.46 | 9200 | 0.7042 | 0.7242 | 0.7247 |
| 0.2779 | 47.47 | 9400 | 0.7097 | 0.7203 | 0.7213 |
| 0.2705 | 48.48 | 9600 | 0.7155 | 0.7222 | 0.7229 |
| 0.2768 | 49.49 | 9800 | 0.7125 | 0.7171 | 0.7181 |
| 0.2705 | 50.51 | 10000 | 0.7124 | 0.7231 | 0.7238 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:35:24+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_16384\_512\_56M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5103
* F1 Score: 0.7719
* Accuracy: 0.7727
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
sentence-similarity | peft |
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading base Meta-Llama-3 model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
)
model = model.merge_and_unload() # This can take several minutes on cpu
# Loading supervised model. This loads the trained LoRA weights on top of the MNTP model. Hence the final weights are: base model + MNTP (LoRA) + supervised (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.6470, 0.1619],
[0.0786, 0.5844]])
"""
```
## Questions
If you have any question about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). | {"language": ["en"], "license": "mit", "library_name": "peft", "tags": ["text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "LLM2Vec-Meta-Llama-3-supervised", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 79.94029850746269}, {"type": "ap", "value": 44.93223506764482}, {"type": "f1", "value": 74.30328994013465}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 86.06680000000001}, {"type": "ap", "value": 81.97124658709345}, {"type": "f1", "value": 86.00558036874241}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 46.836}, {"type": "f1", "value": 46.05094679201488}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.980000000000004}, {"type": "map_at_10", "value": 54.167}, {"type": "map_at_100", "value": 54.735}, {"type": "map_at_1000", "value": 54.738}, {"type": "map_at_3", "value": 49.384}, {"type": "map_at_5", "value": 52.285000000000004}, {"type": "mrr_at_1", "value": 38.549}, {"type": "mrr_at_10", "value": 54.351000000000006}, {"type": "mrr_at_100", "value": 54.932}, {"type": "mrr_at_1000", "value": 54.935}, {"type": "mrr_at_3", "value": 49.585}, {"type": "mrr_at_5", "value": 52.469}, {"type": "ndcg_at_1", "value": 37.980000000000004}, {"type": "ndcg_at_10", "value": 62.778999999999996}, {"type": "ndcg_at_100", "value": 64.986}, {"type": "ndcg_at_1000", "value": 65.036}, {"type": "ndcg_at_3", "value": 53.086999999999996}, {"type": "ndcg_at_5", "value": 58.263}, {"type": "precision_at_1", "value": 37.980000000000004}, {"type": "precision_at_10", "value": 9.011}, {"type": "precision_at_100", "value": 0.993}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 21.266}, {"type": "precision_at_5", "value": 15.248999999999999}, {"type": "recall_at_1", "value": 37.980000000000004}, {"type": "recall_at_10", "value": 90.114}, {"type": "recall_at_100", "value": 99.289}, {"type": "recall_at_1000", "value": 99.644}, {"type": "recall_at_3", "value": 63.798}, {"type": "recall_at_5", "value": 76.24499999999999}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 44.27081216556421}]}, {"task": {"type": 
"Clustering"}, "dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 46.8490872532913}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 65.18525400430678}, {"type": "mrr", "value": 78.80149936244119}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_spearman", "value": 84.92301936595548}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 88.0487012987013}, {"type": "f1", "value": 88.00953788281542}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 32.34687321141145}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 36.69881680534123}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "cqadupstack/android", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.742}, {"type": "map_at_10", "value": 51.803}, {"type": "map_at_100", "value": 53.556000000000004}, {"type": "map_at_1000", "value": 53.652}, {"type": "map_at_3", "value": 47.286}, {"type": "map_at_5", "value": 50.126000000000005}, {"type": "mrr_at_1", "value": 46.924}, {"type": "mrr_at_10", "value": 57.857}, {"type": "mrr_at_100", "value": 58.592}, {"type": "mrr_at_1000", "value": 58.619}, {"type": "mrr_at_3", "value": 55.340999999999994}, {"type": "mrr_at_5", "value": 57.150999999999996}, {"type": "ndcg_at_1", "value": 46.924}, {"type": "ndcg_at_10", "value": 58.733999999999995}, {"type": "ndcg_at_100", "value": 63.771}, {"type": "ndcg_at_1000", "value": 64.934}, {"type": "ndcg_at_3", "value": 53.189}, {"type": "ndcg_at_5", "value": 56.381}, {"type": "precision_at_1", "value": 46.924}, {"type": "precision_at_10", "value": 11.431}, {"type": "precision_at_100", "value": 1.73}, {"type": "precision_at_1000", "value": 0.213}, {"type": "precision_at_3", "value": 25.942}, {"type": "precision_at_5", "value": 19.113}, {"type": "recall_at_1", "value": 37.742}, {"type": "recall_at_10", "value": 71.34}, {"type": "recall_at_100", "value": 91.523}, {"type": "recall_at_1000", "value": 98.494}, {"type": "recall_at_3", "value": 55.443}, {"type": "recall_at_5", "value": 64.122}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackEnglishRetrieval", "type": "cqadupstack/english", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 34.183}, {"type": "map_at_10", "value": 46.837}, {"type": "map_at_100", "value": 48.126000000000005}, 
{"type": "map_at_1000", "value": 48.25}, {"type": "map_at_3", "value": 43.171}, {"type": "map_at_5", "value": 45.318999999999996}, {"type": "mrr_at_1", "value": 43.376}, {"type": "mrr_at_10", "value": 52.859}, {"type": "mrr_at_100", "value": 53.422000000000004}, {"type": "mrr_at_1000", "value": 53.456}, {"type": "mrr_at_3", "value": 50.434999999999995}, {"type": "mrr_at_5", "value": 51.861999999999995}, {"type": "ndcg_at_1", "value": 43.376}, {"type": "ndcg_at_10", "value": 53.223}, {"type": "ndcg_at_100", "value": 57.175}, {"type": "ndcg_at_1000", "value": 58.86900000000001}, {"type": "ndcg_at_3", "value": 48.417}, {"type": "ndcg_at_5", "value": 50.77}, {"type": "precision_at_1", "value": 43.376}, {"type": "precision_at_10", "value": 10.236}, {"type": "precision_at_100", "value": 1.5730000000000002}, {"type": "precision_at_1000", "value": 0.203}, {"type": "precision_at_3", "value": 23.97}, {"type": "precision_at_5", "value": 17.134}, {"type": "recall_at_1", "value": 34.183}, {"type": "recall_at_10", "value": 64.866}, {"type": "recall_at_100", "value": 81.26100000000001}, {"type": "recall_at_1000", "value": 91.412}, {"type": "recall_at_3", "value": 50.080000000000005}, {"type": "recall_at_5", "value": 56.871}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGamingRetrieval", "type": "cqadupstack/gaming", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 44.878}, {"type": "map_at_10", "value": 58.656}, {"type": "map_at_100", "value": 59.668}, {"type": "map_at_1000", "value": 59.704}, {"type": "map_at_3", "value": 54.891}, {"type": "map_at_5", "value": 57.050999999999995}, {"type": "mrr_at_1", "value": 51.975}, {"type": "mrr_at_10", "value": 62.357}, {"type": "mrr_at_100", "value": 62.907999999999994}, {"type": "mrr_at_1000", "value": 62.925}, {"type": "mrr_at_3", "value": 59.801}, {"type": "mrr_at_5", "value": 61.278}, {"type": "ndcg_at_1", "value": 51.975}, {"type": "ndcg_at_10", "value": 64.95100000000001}, {"type": "ndcg_at_100", "value": 68.414}, {"type": "ndcg_at_1000", "value": 69.077}, {"type": "ndcg_at_3", "value": 58.897999999999996}, {"type": "ndcg_at_5", "value": 61.866}, {"type": "precision_at_1", "value": 51.975}, {"type": "precision_at_10", "value": 10.502}, {"type": "precision_at_100", "value": 1.31}, {"type": "precision_at_1000", "value": 0.13899999999999998}, {"type": "precision_at_3", "value": 26.290000000000003}, {"type": "precision_at_5", "value": 18.093999999999998}, {"type": "recall_at_1", "value": 44.878}, {"type": "recall_at_10", "value": 79.746}, {"type": "recall_at_100", "value": 94.17}, {"type": "recall_at_1000", "value": 98.80499999999999}, {"type": "recall_at_3", "value": 63.70099999999999}, {"type": "recall_at_5", "value": 70.878}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGisRetrieval", "type": "cqadupstack/gis", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 28.807}, {"type": "map_at_10", "value": 39.431}, {"type": "map_at_100", "value": 40.56}, {"type": "map_at_1000", "value": 40.617999999999995}, {"type": "map_at_3", "value": 36.436}, {"type": "map_at_5", "value": 37.955}, {"type": "mrr_at_1", "value": 31.186000000000003}, {"type": "mrr_at_10", "value": 41.654}, {"type": "mrr_at_100", "value": 42.58}, {"type": "mrr_at_1000", "value": 42.623}, {"type": "mrr_at_3", "value": 38.983000000000004}, {"type": "mrr_at_5", "value": 40.35}, {"type": "ndcg_at_1", "value": 31.186000000000003}, {"type": "ndcg_at_10", 
"value": 45.297}, {"type": "ndcg_at_100", "value": 50.515}, {"type": "ndcg_at_1000", "value": 52.005}, {"type": "ndcg_at_3", "value": 39.602}, {"type": "ndcg_at_5", "value": 42.027}, {"type": "precision_at_1", "value": 31.186000000000003}, {"type": "precision_at_10", "value": 7.073}, {"type": "precision_at_100", "value": 1.0210000000000001}, {"type": "precision_at_1000", "value": 0.11900000000000001}, {"type": "precision_at_3", "value": 17.1}, {"type": "precision_at_5", "value": 11.729000000000001}, {"type": "recall_at_1", "value": 28.807}, {"type": "recall_at_10", "value": 61.138999999999996}, {"type": "recall_at_100", "value": 84.491}, {"type": "recall_at_1000", "value": 95.651}, {"type": "recall_at_3", "value": 45.652}, {"type": "recall_at_5", "value": 51.522}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackMathematicaRetrieval", "type": "cqadupstack/mathematica", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 20.607}, {"type": "map_at_10", "value": 31.944}, {"type": "map_at_100", "value": 33.317}, {"type": "map_at_1000", "value": 33.428000000000004}, {"type": "map_at_3", "value": 28.508}, {"type": "map_at_5", "value": 30.348999999999997}, {"type": "mrr_at_1", "value": 25.622}, {"type": "mrr_at_10", "value": 36.726}, {"type": "mrr_at_100", "value": 37.707}, {"type": "mrr_at_1000", "value": 37.761}, {"type": "mrr_at_3", "value": 33.934}, {"type": "mrr_at_5", "value": 35.452}, {"type": "ndcg_at_1", "value": 25.622}, {"type": "ndcg_at_10", "value": 38.462}, {"type": "ndcg_at_100", "value": 44.327}, {"type": "ndcg_at_1000", "value": 46.623}, {"type": "ndcg_at_3", "value": 32.583}, {"type": "ndcg_at_5", "value": 35.175}, {"type": "precision_at_1", "value": 25.622}, {"type": "precision_at_10", "value": 7.425}, {"type": "precision_at_100", "value": 1.173}, {"type": "precision_at_1000", "value": 0.149}, {"type": "precision_at_3", "value": 16.418}, {"type": "precision_at_5", "value": 11.866}, {"type": "recall_at_1", "value": 20.607}, {"type": "recall_at_10", "value": 53.337}, {"type": "recall_at_100", "value": 78.133}, {"type": "recall_at_1000", "value": 94.151}, {"type": "recall_at_3", "value": 37.088}, {"type": "recall_at_5", "value": 43.627}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval", "type": "cqadupstack/physics", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 33.814}, {"type": "map_at_10", "value": 47.609}, {"type": "map_at_100", "value": 48.972}, {"type": "map_at_1000", "value": 49.061}, {"type": "map_at_3", "value": 43.397999999999996}, {"type": "map_at_5", "value": 45.839}, {"type": "mrr_at_1", "value": 42.059999999999995}, {"type": "mrr_at_10", "value": 53.074}, {"type": "mrr_at_100", "value": 53.76800000000001}, {"type": "mrr_at_1000", "value": 53.794}, {"type": "mrr_at_3", "value": 50.241}, {"type": "mrr_at_5", "value": 51.805}, {"type": "ndcg_at_1", "value": 42.059999999999995}, {"type": "ndcg_at_10", "value": 54.419}, {"type": "ndcg_at_100", "value": 59.508}, {"type": "ndcg_at_1000", "value": 60.858000000000004}, {"type": "ndcg_at_3", "value": 48.296}, {"type": "ndcg_at_5", "value": 51.28}, {"type": "precision_at_1", "value": 42.059999999999995}, {"type": "precision_at_10", "value": 10.231}, {"type": "precision_at_100", "value": 1.4789999999999999}, {"type": "precision_at_1000", "value": 0.17700000000000002}, {"type": "precision_at_3", "value": 23.419999999999998}, {"type": "precision_at_5", "value": 
16.843}, {"type": "recall_at_1", "value": 33.814}, {"type": "recall_at_10", "value": 68.88}, {"type": "recall_at_100", "value": 89.794}, {"type": "recall_at_1000", "value": 98.058}, {"type": "recall_at_3", "value": 51.915}, {"type": "recall_at_5", "value": 59.704}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval", "type": "cqadupstack/programmers", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 29.668}, {"type": "map_at_10", "value": 43.032}, {"type": "map_at_100", "value": 44.48}, {"type": "map_at_1000", "value": 44.574000000000005}, {"type": "map_at_3", "value": 38.609}, {"type": "map_at_5", "value": 41.164}, {"type": "mrr_at_1", "value": 37.785000000000004}, {"type": "mrr_at_10", "value": 48.898}, {"type": "mrr_at_100", "value": 49.728}, {"type": "mrr_at_1000", "value": 49.769000000000005}, {"type": "mrr_at_3", "value": 45.909}, {"type": "mrr_at_5", "value": 47.61}, {"type": "ndcg_at_1", "value": 37.785000000000004}, {"type": "ndcg_at_10", "value": 50.21099999999999}, {"type": "ndcg_at_100", "value": 55.657999999999994}, {"type": "ndcg_at_1000", "value": 57.172}, {"type": "ndcg_at_3", "value": 43.726}, {"type": "ndcg_at_5", "value": 46.758}, {"type": "precision_at_1", "value": 37.785000000000004}, {"type": "precision_at_10", "value": 9.669}, {"type": "precision_at_100", "value": 1.4409999999999998}, {"type": "precision_at_1000", "value": 0.174}, {"type": "precision_at_3", "value": 21.651}, {"type": "precision_at_5", "value": 15.822}, {"type": "recall_at_1", "value": 29.668}, {"type": "recall_at_10", "value": 65.575}, {"type": "recall_at_100", "value": 87.977}, {"type": "recall_at_1000", "value": 97.615}, {"type": "recall_at_3", "value": 47.251}, {"type": "recall_at_5", "value": 55.359}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "mteb/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 30.29925}, {"type": "map_at_10", "value": 41.98708333333333}, {"type": "map_at_100", "value": 43.306916666666666}, {"type": "map_at_1000", "value": 43.40716666666667}, {"type": "map_at_3", "value": 38.431666666666665}, {"type": "map_at_5", "value": 40.4195}, {"type": "mrr_at_1", "value": 36.24483333333334}, {"type": "mrr_at_10", "value": 46.32666666666667}, {"type": "mrr_at_100", "value": 47.13983333333333}, {"type": "mrr_at_1000", "value": 47.18058333333334}, {"type": "mrr_at_3", "value": 43.66799999999999}, {"type": "mrr_at_5", "value": 45.163666666666664}, {"type": "ndcg_at_1", "value": 36.24483333333334}, {"type": "ndcg_at_10", "value": 48.251916666666666}, {"type": "ndcg_at_100", "value": 53.3555}, {"type": "ndcg_at_1000", "value": 55.024249999999995}, {"type": "ndcg_at_3", "value": 42.599583333333335}, {"type": "ndcg_at_5", "value": 45.24166666666666}, {"type": "precision_at_1", "value": 36.24483333333334}, {"type": "precision_at_10", "value": 8.666833333333333}, {"type": "precision_at_100", "value": 1.3214166666666665}, {"type": "precision_at_1000", "value": 0.16475}, {"type": "precision_at_3", "value": 19.9955}, {"type": "precision_at_5", "value": 14.271999999999998}, {"type": "recall_at_1", "value": 30.29925}, {"type": "recall_at_10", "value": 62.232333333333344}, {"type": "recall_at_100", "value": 84.151}, {"type": "recall_at_1000", "value": 95.37333333333333}, {"type": "recall_at_3", "value": 46.45541666666667}, {"type": "recall_at_5", "value": 53.264}]}, {"task": {"type": "Retrieval"}, 
"dataset": {"name": "MTEB CQADupstackStatsRetrieval", "type": "cqadupstack/stats", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 28.996}, {"type": "map_at_10", "value": 38.047}, {"type": "map_at_100", "value": 39.121}, {"type": "map_at_1000", "value": 39.202999999999996}, {"type": "map_at_3", "value": 35.376000000000005}, {"type": "map_at_5", "value": 36.763}, {"type": "mrr_at_1", "value": 32.362}, {"type": "mrr_at_10", "value": 40.717999999999996}, {"type": "mrr_at_100", "value": 41.586}, {"type": "mrr_at_1000", "value": 41.641}, {"type": "mrr_at_3", "value": 38.292}, {"type": "mrr_at_5", "value": 39.657}, {"type": "ndcg_at_1", "value": 32.362}, {"type": "ndcg_at_10", "value": 43.105}, {"type": "ndcg_at_100", "value": 48.026}, {"type": "ndcg_at_1000", "value": 49.998}, {"type": "ndcg_at_3", "value": 38.147999999999996}, {"type": "ndcg_at_5", "value": 40.385}, {"type": "precision_at_1", "value": 32.362}, {"type": "precision_at_10", "value": 6.7940000000000005}, {"type": "precision_at_100", "value": 1.0170000000000001}, {"type": "precision_at_1000", "value": 0.125}, {"type": "precision_at_3", "value": 16.411}, {"type": "precision_at_5", "value": 11.35}, {"type": "recall_at_1", "value": 28.996}, {"type": "recall_at_10", "value": 55.955}, {"type": "recall_at_100", "value": 77.744}, {"type": "recall_at_1000", "value": 92.196}, {"type": "recall_at_3", "value": 42.254999999999995}, {"type": "recall_at_5", "value": 47.776}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval", "type": "cqadupstack/tex", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 20.029}, {"type": "map_at_10", "value": 29.188}, {"type": "map_at_100", "value": 30.484}, {"type": "map_at_1000", "value": 30.608}, {"type": "map_at_3", "value": 26.195}, {"type": "map_at_5", "value": 27.866999999999997}, {"type": "mrr_at_1", "value": 24.57}, {"type": "mrr_at_10", "value": 33.461}, {"type": "mrr_at_100", "value": 34.398}, {"type": "mrr_at_1000", "value": 34.464}, {"type": "mrr_at_3", "value": 30.856}, {"type": "mrr_at_5", "value": 32.322}, {"type": "ndcg_at_1", "value": 24.57}, {"type": "ndcg_at_10", "value": 34.846}, {"type": "ndcg_at_100", "value": 40.544000000000004}, {"type": "ndcg_at_1000", "value": 43.019}, {"type": "ndcg_at_3", "value": 29.683999999999997}, {"type": "ndcg_at_5", "value": 32.11}, {"type": "precision_at_1", "value": 24.57}, {"type": "precision_at_10", "value": 6.535}, {"type": "precision_at_100", "value": 1.11}, {"type": "precision_at_1000", "value": 0.149}, {"type": "precision_at_3", "value": 14.338000000000001}, {"type": "precision_at_5", "value": 10.496}, {"type": "recall_at_1", "value": 20.029}, {"type": "recall_at_10", "value": 47.509}, {"type": "recall_at_100", "value": 72.61999999999999}, {"type": "recall_at_1000", "value": 89.778}, {"type": "recall_at_3", "value": 33.031}, {"type": "recall_at_5", "value": 39.306000000000004}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval", "type": "cqadupstack/unix", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.753999999999998}, {"type": "map_at_10", "value": 43.814}, {"type": "map_at_100", "value": 45.072}, {"type": "map_at_1000", "value": 45.155}, {"type": "map_at_3", "value": 40.316}, {"type": "map_at_5", "value": 42.15}, {"type": "mrr_at_1", "value": 38.06}, {"type": "mrr_at_10", "value": 48.311}, {"type": "mrr_at_100", "value": 
49.145}, {"type": "mrr_at_1000", "value": 49.181000000000004}, {"type": "mrr_at_3", "value": 45.678000000000004}, {"type": "mrr_at_5", "value": 47.072}, {"type": "ndcg_at_1", "value": 38.06}, {"type": "ndcg_at_10", "value": 50.083}, {"type": "ndcg_at_100", "value": 55.342}, {"type": "ndcg_at_1000", "value": 56.87}, {"type": "ndcg_at_3", "value": 44.513999999999996}, {"type": "ndcg_at_5", "value": 46.886}, {"type": "precision_at_1", "value": 38.06}, {"type": "precision_at_10", "value": 8.638}, {"type": "precision_at_100", "value": 1.253}, {"type": "precision_at_1000", "value": 0.149}, {"type": "precision_at_3", "value": 20.709}, {"type": "precision_at_5", "value": 14.44}, {"type": "recall_at_1", "value": 31.753999999999998}, {"type": "recall_at_10", "value": 64.473}, {"type": "recall_at_100", "value": 86.832}, {"type": "recall_at_1000", "value": 96.706}, {"type": "recall_at_3", "value": 48.937000000000005}, {"type": "recall_at_5", "value": 55.214}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval", "type": "cqadupstack/webmasters", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 28.815}, {"type": "map_at_10", "value": 40.595}, {"type": "map_at_100", "value": 42.337}, {"type": "map_at_1000", "value": 42.559000000000005}, {"type": "map_at_3", "value": 37.120999999999995}, {"type": "map_at_5", "value": 38.912}, {"type": "mrr_at_1", "value": 34.585}, {"type": "mrr_at_10", "value": 45.068000000000005}, {"type": "mrr_at_100", "value": 45.93}, {"type": "mrr_at_1000", "value": 45.974}, {"type": "mrr_at_3", "value": 42.26}, {"type": "mrr_at_5", "value": 43.742}, {"type": "ndcg_at_1", "value": 34.585}, {"type": "ndcg_at_10", "value": 47.519}, {"type": "ndcg_at_100", "value": 53.102000000000004}, {"type": "ndcg_at_1000", "value": 54.949999999999996}, {"type": "ndcg_at_3", "value": 41.719}, {"type": "ndcg_at_5", "value": 44.17}, {"type": "precision_at_1", "value": 34.585}, {"type": "precision_at_10", "value": 9.368}, {"type": "precision_at_100", "value": 1.7870000000000001}, {"type": "precision_at_1000", "value": 0.254}, {"type": "precision_at_3", "value": 19.895}, {"type": "precision_at_5", "value": 14.506}, {"type": "recall_at_1", "value": 28.815}, {"type": "recall_at_10", "value": 61.414}, {"type": "recall_at_100", "value": 85.922}, {"type": "recall_at_1000", "value": 97.15}, {"type": "recall_at_3", "value": 45.076}, {"type": "recall_at_5", "value": 51.271}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWordpressRetrieval", "type": "cqadupstack/wordpress", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 24.298000000000002}, {"type": "map_at_10", "value": 32.889}, {"type": "map_at_100", "value": 33.989999999999995}, {"type": "map_at_1000", "value": 34.074}, {"type": "map_at_3", "value": 29.873}, {"type": "map_at_5", "value": 31.539}, {"type": "mrr_at_1", "value": 26.433}, {"type": "mrr_at_10", "value": 34.937000000000005}, {"type": "mrr_at_100", "value": 35.914}, {"type": "mrr_at_1000", "value": 35.96}, {"type": "mrr_at_3", "value": 32.286}, {"type": "mrr_at_5", "value": 33.663}, {"type": "ndcg_at_1", "value": 26.433}, {"type": "ndcg_at_10", "value": 38.173}, {"type": "ndcg_at_100", "value": 43.884}, {"type": "ndcg_at_1000", "value": 45.916000000000004}, {"type": "ndcg_at_3", "value": 32.419}, {"type": "ndcg_at_5", "value": 35.092}, {"type": "precision_at_1", "value": 26.433}, {"type": "precision_at_10", "value": 6.1}, {"type": 
"precision_at_100", "value": 0.963}, {"type": "precision_at_1000", "value": 0.126}, {"type": "precision_at_3", "value": 13.802}, {"type": "precision_at_5", "value": 9.871}, {"type": "recall_at_1", "value": 24.298000000000002}, {"type": "recall_at_10", "value": 52.554}, {"type": "recall_at_100", "value": 79.345}, {"type": "recall_at_1000", "value": 94.464}, {"type": "recall_at_3", "value": 37.036}, {"type": "recall_at_5", "value": 43.518}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 14.194999999999999}, {"type": "map_at_10", "value": 24.563}, {"type": "map_at_100", "value": 26.775}, {"type": "map_at_1000", "value": 26.965}, {"type": "map_at_3", "value": 19.983999999999998}, {"type": "map_at_5", "value": 22.24}, {"type": "mrr_at_1", "value": 31.661}, {"type": "mrr_at_10", "value": 44.804}, {"type": "mrr_at_100", "value": 45.655}, {"type": "mrr_at_1000", "value": 45.678000000000004}, {"type": "mrr_at_3", "value": 41.292}, {"type": "mrr_at_5", "value": 43.468}, {"type": "ndcg_at_1", "value": 31.661}, {"type": "ndcg_at_10", "value": 34.271}, {"type": "ndcg_at_100", "value": 42.04}, {"type": "ndcg_at_1000", "value": 45.101}, {"type": "ndcg_at_3", "value": 27.529999999999998}, {"type": "ndcg_at_5", "value": 29.862}, {"type": "precision_at_1", "value": 31.661}, {"type": "precision_at_10", "value": 10.925}, {"type": "precision_at_100", "value": 1.92}, {"type": "precision_at_1000", "value": 0.25}, {"type": "precision_at_3", "value": 20.456}, {"type": "precision_at_5", "value": 16.012999999999998}, {"type": "recall_at_1", "value": 14.194999999999999}, {"type": "recall_at_10", "value": 41.388999999999996}, {"type": "recall_at_100", "value": 67.58800000000001}, {"type": "recall_at_1000", "value": 84.283}, {"type": "recall_at_3", "value": 25.089}, {"type": "recall_at_5", "value": 31.642}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.898}, {"type": "map_at_10", "value": 23.226}, {"type": "map_at_100", "value": 33.372}, {"type": "map_at_1000", "value": 35.407}, {"type": "map_at_3", "value": 15.892999999999999}, {"type": "map_at_5", "value": 18.747}, {"type": "mrr_at_1", "value": 73.5}, {"type": "mrr_at_10", "value": 80.404}, {"type": "mrr_at_100", "value": 80.671}, {"type": "mrr_at_1000", "value": 80.676}, {"type": "mrr_at_3", "value": 78.958}, {"type": "mrr_at_5", "value": 79.683}, {"type": "ndcg_at_1", "value": 62.0}, {"type": "ndcg_at_10", "value": 48.337}, {"type": "ndcg_at_100", "value": 53.474}, {"type": "ndcg_at_1000", "value": 60.999}, {"type": "ndcg_at_3", "value": 52.538}, {"type": "ndcg_at_5", "value": 49.659}, {"type": "precision_at_1", "value": 73.5}, {"type": "precision_at_10", "value": 39.25}, {"type": "precision_at_100", "value": 12.4}, {"type": "precision_at_1000", "value": 2.4459999999999997}, {"type": "precision_at_3", "value": 56.333}, {"type": "precision_at_5", "value": 48.15}, {"type": "recall_at_1", "value": 9.898}, {"type": "recall_at_10", "value": 29.511}, {"type": "recall_at_100", "value": 60.45700000000001}, {"type": "recall_at_1000", "value": 84.47200000000001}, {"type": "recall_at_3", "value": 17.064}, {"type": "recall_at_5", "value": 21.258}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": 
"default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 51.19999999999999}, {"type": "f1", "value": 46.23854137552949}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 80.093}, {"type": "map_at_10", "value": 87.139}, {"type": "map_at_100", "value": 87.333}, {"type": "map_at_1000", "value": 87.344}, {"type": "map_at_3", "value": 86.395}, {"type": "map_at_5", "value": 86.866}, {"type": "mrr_at_1", "value": 86.36399999999999}, {"type": "mrr_at_10", "value": 91.867}, {"type": "mrr_at_100", "value": 91.906}, {"type": "mrr_at_1000", "value": 91.90700000000001}, {"type": "mrr_at_3", "value": 91.484}, {"type": "mrr_at_5", "value": 91.759}, {"type": "ndcg_at_1", "value": 86.36399999999999}, {"type": "ndcg_at_10", "value": 90.197}, {"type": "ndcg_at_100", "value": 90.819}, {"type": "ndcg_at_1000", "value": 91.01599999999999}, {"type": "ndcg_at_3", "value": 89.166}, {"type": "ndcg_at_5", "value": 89.74}, {"type": "precision_at_1", "value": 86.36399999999999}, {"type": "precision_at_10", "value": 10.537}, {"type": "precision_at_100", "value": 1.106}, {"type": "precision_at_1000", "value": 0.11399999999999999}, {"type": "precision_at_3", "value": 33.608}, {"type": "precision_at_5", "value": 20.618}, {"type": "recall_at_1", "value": 80.093}, {"type": "recall_at_10", "value": 95.003}, {"type": "recall_at_100", "value": 97.328}, {"type": "recall_at_1000", "value": 98.485}, {"type": "recall_at_3", "value": 92.072}, {"type": "recall_at_5", "value": 93.661}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 29.063}, {"type": "map_at_10", "value": 47.113}, {"type": "map_at_100", "value": 49.294}, {"type": "map_at_1000", "value": 49.422}, {"type": "map_at_3", "value": 40.955000000000005}, {"type": "map_at_5", "value": 44.5}, {"type": "mrr_at_1", "value": 55.401}, {"type": "mrr_at_10", "value": 62.99400000000001}, {"type": "mrr_at_100", "value": 63.63999999999999}, {"type": "mrr_at_1000", "value": 63.661}, {"type": "mrr_at_3", "value": 61.034}, {"type": "mrr_at_5", "value": 62.253}, {"type": "ndcg_at_1", "value": 55.401}, {"type": "ndcg_at_10", "value": 55.332}, {"type": "ndcg_at_100", "value": 61.931000000000004}, {"type": "ndcg_at_1000", "value": 63.841}, {"type": "ndcg_at_3", "value": 50.92}, {"type": "ndcg_at_5", "value": 52.525}, {"type": "precision_at_1", "value": 55.401}, {"type": "precision_at_10", "value": 15.262}, {"type": "precision_at_100", "value": 2.231}, {"type": "precision_at_1000", "value": 0.256}, {"type": "precision_at_3", "value": 33.848}, {"type": "precision_at_5", "value": 25.031}, {"type": "recall_at_1", "value": 29.063}, {"type": "recall_at_10", "value": 62.498}, {"type": "recall_at_100", "value": 85.86}, {"type": "recall_at_1000", "value": 97.409}, {"type": "recall_at_3", "value": 45.472}, {"type": "recall_at_5", "value": 53.344}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 37.205}, {"type": "map_at_10", "value": 64.19399999999999}, {"type": "map_at_100", "value": 65.183}, {"type": "map_at_1000", "value": 65.23299999999999}, {"type": "map_at_3", "value": 60.239}, {"type": "map_at_5", "value": 62.695}, 
{"type": "mrr_at_1", "value": 74.409}, {"type": "mrr_at_10", "value": 80.84}, {"type": "mrr_at_100", "value": 81.10199999999999}, {"type": "mrr_at_1000", "value": 81.109}, {"type": "mrr_at_3", "value": 79.739}, {"type": "mrr_at_5", "value": 80.46600000000001}, {"type": "ndcg_at_1", "value": 74.409}, {"type": "ndcg_at_10", "value": 71.757}, {"type": "ndcg_at_100", "value": 75.152}, {"type": "ndcg_at_1000", "value": 76.098}, {"type": "ndcg_at_3", "value": 66.174}, {"type": "ndcg_at_5", "value": 69.283}, {"type": "precision_at_1", "value": 74.409}, {"type": "precision_at_10", "value": 15.503}, {"type": "precision_at_100", "value": 1.8110000000000002}, {"type": "precision_at_1000", "value": 0.194}, {"type": "precision_at_3", "value": 43.457}, {"type": "precision_at_5", "value": 28.532000000000004}, {"type": "recall_at_1", "value": 37.205}, {"type": "recall_at_10", "value": 77.515}, {"type": "recall_at_100", "value": 90.56}, {"type": "recall_at_1000", "value": 96.759}, {"type": "recall_at_3", "value": 65.18599999999999}, {"type": "recall_at_5", "value": 71.33}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 82.9448}, {"type": "ap", "value": 78.25923353099166}, {"type": "f1", "value": 82.86422040179993}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 22.834}, {"type": "map_at_10", "value": 35.85}, {"type": "map_at_100", "value": 37.013}, {"type": "map_at_1000", "value": 37.056}, {"type": "map_at_3", "value": 31.613000000000003}, {"type": "map_at_5", "value": 34.113}, {"type": "mrr_at_1", "value": 23.424}, {"type": "mrr_at_10", "value": 36.398}, {"type": "mrr_at_100", "value": 37.498}, {"type": "mrr_at_1000", "value": 37.534}, {"type": "mrr_at_3", "value": 32.275999999999996}, {"type": "mrr_at_5", "value": 34.705000000000005}, {"type": "ndcg_at_1", "value": 23.424}, {"type": "ndcg_at_10", "value": 43.236999999999995}, {"type": "ndcg_at_100", "value": 48.776}, {"type": "ndcg_at_1000", "value": 49.778}, {"type": "ndcg_at_3", "value": 34.692}, {"type": "ndcg_at_5", "value": 39.119}, {"type": "precision_at_1", "value": 23.424}, {"type": "precision_at_10", "value": 6.918}, {"type": "precision_at_100", "value": 0.9690000000000001}, {"type": "precision_at_1000", "value": 0.105}, {"type": "precision_at_3", "value": 14.881}, {"type": "precision_at_5", "value": 11.183}, {"type": "recall_at_1", "value": 22.834}, {"type": "recall_at_10", "value": 66.03999999999999}, {"type": "recall_at_100", "value": 91.532}, {"type": "recall_at_1000", "value": 99.068}, {"type": "recall_at_3", "value": 42.936}, {"type": "recall_at_5", "value": 53.539}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 96.1377108983128}, {"type": "f1", "value": 95.87034720246666}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": "mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 86.10579115367078}, {"type": "f1", "value": 70.20810321445228}]}, {"task": 
{"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 79.80497646267652}, {"type": "f1", "value": 77.32475274059293}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 81.52320107599192}, {"type": "f1", "value": 81.22312939311655}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 30.709106678767018}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 32.95879128399585}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 32.67476691128679}, {"type": "mrr", "value": 33.921654478513986}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 7.223}, {"type": "map_at_10", "value": 15.992999999999999}, {"type": "map_at_100", "value": 21.09}, {"type": "map_at_1000", "value": 22.822}, {"type": "map_at_3", "value": 11.475}, {"type": "map_at_5", "value": 13.501}, {"type": "mrr_at_1", "value": 53.251000000000005}, {"type": "mrr_at_10", "value": 61.878}, {"type": "mrr_at_100", "value": 62.307}, {"type": "mrr_at_1000", "value": 62.342}, {"type": "mrr_at_3", "value": 60.01}, {"type": "mrr_at_5", "value": 61.202}, {"type": "ndcg_at_1", "value": 51.702999999999996}, {"type": "ndcg_at_10", "value": 41.833999999999996}, {"type": "ndcg_at_100", "value": 39.061}, {"type": "ndcg_at_1000", "value": 47.397}, {"type": "ndcg_at_3", "value": 47.083000000000006}, {"type": "ndcg_at_5", "value": 44.722}, {"type": "precision_at_1", "value": 53.251000000000005}, {"type": "precision_at_10", "value": 31.3}, {"type": "precision_at_100", "value": 10.254000000000001}, {"type": "precision_at_1000", "value": 2.338}, {"type": "precision_at_3", "value": 43.756}, {"type": "precision_at_5", "value": 38.824}, {"type": "recall_at_1", "value": 7.223}, {"type": "recall_at_10", "value": 20.529}, {"type": "recall_at_100", "value": 39.818}, {"type": "recall_at_1000", "value": 70.152}, {"type": "recall_at_3", "value": 12.666}, {"type": "recall_at_5", "value": 15.798000000000002}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 38.847}, {"type": "map_at_10", "value": 56.255}, {"type": "map_at_100", "value": 57.019}, {"type": "map_at_1000", "value": 57.03}, {"type": "map_at_3", "value": 51.665000000000006}, {"type": "map_at_5", "value": 54.543}, {"type": "mrr_at_1", "value": 43.801}, {"type": "mrr_at_10", "value": 58.733999999999995}, {"type": 
"mrr_at_100", "value": 59.206}, {"type": "mrr_at_1000", "value": 59.21300000000001}, {"type": "mrr_at_3", "value": 55.266999999999996}, {"type": "mrr_at_5", "value": 57.449}, {"type": "ndcg_at_1", "value": 43.772}, {"type": "ndcg_at_10", "value": 64.213}, {"type": "ndcg_at_100", "value": 67.13}, {"type": "ndcg_at_1000", "value": 67.368}, {"type": "ndcg_at_3", "value": 55.977}, {"type": "ndcg_at_5", "value": 60.597}, {"type": "precision_at_1", "value": 43.772}, {"type": "precision_at_10", "value": 10.272}, {"type": "precision_at_100", "value": 1.193}, {"type": "precision_at_1000", "value": 0.121}, {"type": "precision_at_3", "value": 25.261}, {"type": "precision_at_5", "value": 17.885}, {"type": "recall_at_1", "value": 38.847}, {"type": "recall_at_10", "value": 85.76700000000001}, {"type": "recall_at_100", "value": 98.054}, {"type": "recall_at_1000", "value": 99.812}, {"type": "recall_at_3", "value": 64.82}, {"type": "recall_at_5", "value": 75.381}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 68.77}, {"type": "map_at_10", "value": 83.195}, {"type": "map_at_100", "value": 83.869}, {"type": "map_at_1000", "value": 83.883}, {"type": "map_at_3", "value": 80.04599999999999}, {"type": "map_at_5", "value": 82.011}, {"type": "mrr_at_1", "value": 79.2}, {"type": "mrr_at_10", "value": 85.942}, {"type": "mrr_at_100", "value": 86.063}, {"type": "mrr_at_1000", "value": 86.064}, {"type": "mrr_at_3", "value": 84.82}, {"type": "mrr_at_5", "value": 85.56899999999999}, {"type": "ndcg_at_1", "value": 79.17999999999999}, {"type": "ndcg_at_10", "value": 87.161}, {"type": "ndcg_at_100", "value": 88.465}, {"type": "ndcg_at_1000", "value": 88.553}, {"type": "ndcg_at_3", "value": 83.958}, {"type": "ndcg_at_5", "value": 85.699}, {"type": "precision_at_1", "value": 79.17999999999999}, {"type": "precision_at_10", "value": 13.401}, {"type": "precision_at_100", "value": 1.54}, {"type": "precision_at_1000", "value": 0.157}, {"type": "precision_at_3", "value": 36.903000000000006}, {"type": "precision_at_5", "value": 24.404}, {"type": "recall_at_1", "value": 68.77}, {"type": "recall_at_10", "value": 95.132}, {"type": "recall_at_100", "value": 99.58200000000001}, {"type": "recall_at_1000", "value": 99.997}, {"type": "recall_at_3", "value": 86.119}, {"type": "recall_at_5", "value": 90.932}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 61.7204049654583}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 63.98164986883849}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 5.443}, {"type": "map_at_10", "value": 13.86}, {"type": "map_at_100", "value": 16.496}, {"type": "map_at_1000", "value": 16.836000000000002}, {"type": "map_at_3", "value": 9.661}, {"type": "map_at_5", "value": 11.745}, {"type": "mrr_at_1", "value": 26.8}, {"type": "mrr_at_10", "value": 37.777}, {"type": "mrr_at_100", "value": 38.928000000000004}, {"type": 
"mrr_at_1000", "value": 38.967}, {"type": "mrr_at_3", "value": 34.083000000000006}, {"type": "mrr_at_5", "value": 36.308}, {"type": "ndcg_at_1", "value": 26.8}, {"type": "ndcg_at_10", "value": 22.961000000000002}, {"type": "ndcg_at_100", "value": 32.582}, {"type": "ndcg_at_1000", "value": 37.972}, {"type": "ndcg_at_3", "value": 21.292}, {"type": "ndcg_at_5", "value": 18.945999999999998}, {"type": "precision_at_1", "value": 26.8}, {"type": "precision_at_10", "value": 12.06}, {"type": "precision_at_100", "value": 2.593}, {"type": "precision_at_1000", "value": 0.388}, {"type": "precision_at_3", "value": 19.900000000000002}, {"type": "precision_at_5", "value": 16.84}, {"type": "recall_at_1", "value": 5.443}, {"type": "recall_at_10", "value": 24.445}, {"type": "recall_at_100", "value": 52.602000000000004}, {"type": "recall_at_1000", "value": 78.767}, {"type": "recall_at_3", "value": 12.098}, {"type": "recall_at_5", "value": 17.077}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_spearman", "value": 83.9379272617096}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_spearman", "value": 79.26752176661364}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_spearman", "value": 84.8327309083665}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_spearman", "value": 82.9394255552954}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_spearman", "value": 88.08995363382608}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_spearman", "value": 86.53522220099619}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_spearman", "value": 89.57796559847532}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_spearman", "value": 67.66598855577894}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_spearman", "value": 88.0472708354572}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 86.04689157650684}, {"type": "mrr", "value": 
96.51889958262507}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 62.827999999999996}, {"type": "map_at_10", "value": 73.54899999999999}, {"type": "map_at_100", "value": 73.892}, {"type": "map_at_1000", "value": 73.901}, {"type": "map_at_3", "value": 70.663}, {"type": "map_at_5", "value": 72.449}, {"type": "mrr_at_1", "value": 66.0}, {"type": "mrr_at_10", "value": 74.554}, {"type": "mrr_at_100", "value": 74.81700000000001}, {"type": "mrr_at_1000", "value": 74.82600000000001}, {"type": "mrr_at_3", "value": 72.667}, {"type": "mrr_at_5", "value": 73.717}, {"type": "ndcg_at_1", "value": 66.0}, {"type": "ndcg_at_10", "value": 78.218}, {"type": "ndcg_at_100", "value": 79.706}, {"type": "ndcg_at_1000", "value": 79.925}, {"type": "ndcg_at_3", "value": 73.629}, {"type": "ndcg_at_5", "value": 75.89}, {"type": "precision_at_1", "value": 66.0}, {"type": "precision_at_10", "value": 10.333}, {"type": "precision_at_100", "value": 1.113}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 28.889}, {"type": "precision_at_5", "value": 19.067}, {"type": "recall_at_1", "value": 62.827999999999996}, {"type": "recall_at_10", "value": 91.533}, {"type": "recall_at_100", "value": 98.333}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 79.0}, {"type": "recall_at_5", "value": 84.68900000000001}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.8019801980198}, {"type": "cos_sim_ap", "value": 95.09301057928796}, {"type": "cos_sim_f1", "value": 89.71193415637859}, {"type": "cos_sim_precision", "value": 92.37288135593221}, {"type": "cos_sim_recall", "value": 87.2}, {"type": "dot_accuracy", "value": 99.72079207920792}, {"type": "dot_ap", "value": 92.77707970155015}, {"type": "dot_f1", "value": 85.88588588588588}, {"type": "dot_precision", "value": 85.97194388777555}, {"type": "dot_recall", "value": 85.8}, {"type": "euclidean_accuracy", "value": 99.7980198019802}, {"type": "euclidean_ap", "value": 95.04124481520121}, {"type": "euclidean_f1", "value": 89.61693548387096}, {"type": "euclidean_precision", "value": 90.34552845528455}, {"type": "euclidean_recall", "value": 88.9}, {"type": "manhattan_accuracy", "value": 99.7960396039604}, {"type": "manhattan_ap", "value": 95.02691504694813}, {"type": "manhattan_f1", "value": 89.60321446509292}, {"type": "manhattan_precision", "value": 90.0100908173562}, {"type": "manhattan_recall", "value": 89.2}, {"type": "max_accuracy", "value": 99.8019801980198}, {"type": "max_ap", "value": 95.09301057928796}, {"type": "max_f1", "value": 89.71193415637859}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 72.74124969197169}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 
32.262798307863996}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 54.823414217790464}, {"type": "mrr", "value": 55.557133838383834}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 31.01226930465494}, {"type": "cos_sim_spearman", "value": 30.9368445798007}, {"type": "dot_pearson", "value": 30.204833368654533}, {"type": "dot_spearman", "value": 30.438900411966618}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.22699999999999998}, {"type": "map_at_10", "value": 2.0420000000000003}, {"type": "map_at_100", "value": 13.33}, {"type": "map_at_1000", "value": 33.627}, {"type": "map_at_3", "value": 0.639}, {"type": "map_at_5", "value": 1.056}, {"type": "mrr_at_1", "value": 84.0}, {"type": "mrr_at_10", "value": 91.167}, {"type": "mrr_at_100", "value": 91.167}, {"type": "mrr_at_1000", "value": 91.167}, {"type": "mrr_at_3", "value": 90.667}, {"type": "mrr_at_5", "value": 91.167}, {"type": "ndcg_at_1", "value": 82.0}, {"type": "ndcg_at_10", "value": 80.337}, {"type": "ndcg_at_100", "value": 65.852}, {"type": "ndcg_at_1000", "value": 59.821000000000005}, {"type": "ndcg_at_3", "value": 81.061}, {"type": "ndcg_at_5", "value": 81.396}, {"type": "precision_at_1", "value": 84.0}, {"type": "precision_at_10", "value": 85.0}, {"type": "precision_at_100", "value": 67.75999999999999}, {"type": "precision_at_1000", "value": 26.272000000000002}, {"type": "precision_at_3", "value": 85.333}, {"type": "precision_at_5", "value": 86.4}, {"type": "recall_at_1", "value": 0.22699999999999998}, {"type": "recall_at_10", "value": 2.241}, {"type": "recall_at_100", "value": 16.478}, {"type": "recall_at_1000", "value": 56.442}, {"type": "recall_at_3", "value": 0.672}, {"type": "recall_at_5", "value": 1.143}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.836}, {"type": "map_at_10", "value": 8.536000000000001}, {"type": "map_at_100", "value": 14.184}, {"type": "map_at_1000", "value": 15.885}, {"type": "map_at_3", "value": 3.7359999999999998}, {"type": "map_at_5", "value": 5.253}, {"type": "mrr_at_1", "value": 22.448999999999998}, {"type": "mrr_at_10", "value": 34.77}, {"type": "mrr_at_100", "value": 36.18}, {"type": "mrr_at_1000", "value": 36.18}, {"type": "mrr_at_3", "value": 30.612000000000002}, {"type": "mrr_at_5", "value": 32.449}, {"type": "ndcg_at_1", "value": 20.408}, {"type": "ndcg_at_10", "value": 20.498}, {"type": "ndcg_at_100", "value": 33.354}, {"type": "ndcg_at_1000", "value": 45.699}, {"type": "ndcg_at_3", "value": 19.292}, {"type": "ndcg_at_5", "value": 19.541}, {"type": "precision_at_1", "value": 22.448999999999998}, {"type": "precision_at_10", "value": 19.387999999999998}, {"type": "precision_at_100", "value": 7.163}, {"type": "precision_at_1000", "value": 1.541}, {"type": "precision_at_3", "value": 19.728}, {"type": "precision_at_5", "value": 20.0}, {"type": "recall_at_1", "value": 1.836}, 
{"type": "recall_at_10", "value": 15.212}, {"type": "recall_at_100", "value": 45.364}, {"type": "recall_at_1000", "value": 83.64}, {"type": "recall_at_3", "value": 4.651000000000001}, {"type": "recall_at_5", "value": 7.736}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 70.5856}, {"type": "ap", "value": 14.297836125608864}, {"type": "f1", "value": 54.45458507465688}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 61.89869835880024}, {"type": "f1", "value": 62.15163526419782}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 56.408998393035446}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 88.78822197055493}, {"type": "cos_sim_ap", "value": 81.73234934293887}, {"type": "cos_sim_f1", "value": 74.16373812312898}, {"type": "cos_sim_precision", "value": 73.18263549961469}, {"type": "cos_sim_recall", "value": 75.17150395778364}, {"type": "dot_accuracy", "value": 87.85837754068069}, {"type": "dot_ap", "value": 79.69812660365871}, {"type": "dot_f1", "value": 72.52999744702579}, {"type": "dot_precision", "value": 70.25222551928783}, {"type": "dot_recall", "value": 74.96042216358839}, {"type": "euclidean_accuracy", "value": 88.74649818203493}, {"type": "euclidean_ap", "value": 81.47777928110055}, {"type": "euclidean_f1", "value": 74.1248097412481}, {"type": "euclidean_precision", "value": 71.37274059599413}, {"type": "euclidean_recall", "value": 77.0976253298153}, {"type": "manhattan_accuracy", "value": 88.7286165583835}, {"type": "manhattan_ap", "value": 81.47766386927232}, {"type": "manhattan_f1", "value": 74.16730231375541}, {"type": "manhattan_precision", "value": 71.56526005888125}, {"type": "manhattan_recall", "value": 76.96569920844327}, {"type": "max_accuracy", "value": 88.78822197055493}, {"type": "max_ap", "value": 81.73234934293887}, {"type": "max_f1", "value": 74.16730231375541}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 89.30026778437536}, {"type": "cos_sim_ap", "value": 86.56353001037664}, {"type": "cos_sim_f1", "value": 79.359197907585}, {"type": "cos_sim_precision", "value": 75.12379642365887}, {"type": "cos_sim_recall", "value": 84.10070834616569}, {"type": "dot_accuracy", "value": 88.8539604921023}, {"type": "dot_ap", "value": 85.44601003294055}, {"type": "dot_f1", "value": 78.20008094484713}, {"type": "dot_precision", "value": 74.88549080403072}, {"type": "dot_recall", "value": 81.82168155220204}, {"type": 
"euclidean_accuracy", "value": 89.25369658865992}, {"type": "euclidean_ap", "value": 86.46965679550075}, {"type": "euclidean_f1", "value": 79.16785612332285}, {"type": "euclidean_precision", "value": 73.77627028465017}, {"type": "euclidean_recall", "value": 85.4096088697259}, {"type": "manhattan_accuracy", "value": 89.26727985407692}, {"type": "manhattan_ap", "value": 86.46460344566123}, {"type": "manhattan_f1", "value": 79.1723543358}, {"type": "manhattan_precision", "value": 74.20875420875421}, {"type": "manhattan_recall", "value": 84.84755158607946}, {"type": "max_accuracy", "value": 89.30026778437536}, {"type": "max_ap", "value": 86.56353001037664}, {"type": "max_f1", "value": 79.359197907585}]}]}]} | McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised | null | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | null | 2024-04-30T02:35:26+00:00 | [
"2404.05961"
] | [
"en"
] | TAGS
#peft #safetensors #text-embedding #embeddings #information-retrieval #beir #text-classification #language-model #text-clustering #text-semantic-similarity #text-evaluation #text-reranking #feature-extraction #sentence-similarity #Sentence Similarity #natural_questions #ms_marco #fever #hotpot_qa #mteb #en #arxiv-2404.05961 #license-mit #model-index #region-us
|
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- Repository: URL
- Paper: URL
## Installation
## Usage
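The Usage section is left empty in the source card, so the sketch below only illustrates how a supervised LLM2Vec checkpoint such as this one is typically loaded for retrieval-style embedding. The `llm2vec` package name, the `LLM2Vec.from_pretrained` signature (including `peft_model_name_or_path`), and the base-model identifier are assumptions drawn from the project repository and may need adjusting; treat this as an illustrative sketch, not the authoritative API.

```python
# Minimal sketch, assuming the `llm2vec` package from the project repository
# (install with `pip install llm2vec`; the package name is an assumption).
import torch
from llm2vec import LLM2Vec  # assumed import path

# Load the bidirectional-attention base model and apply this supervised PEFT adapter.
# Both identifiers below are assumptions; substitute the names from the repository.
l2v = LLM2Vec.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",  # assumed base checkpoint
    peft_model_name_or_path="McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised",
    device_map="cuda" if torch.cuda.is_available() else "cpu",
    torch_dtype=torch.bfloat16,
)

# Queries are encoded together with a task instruction; documents are encoded as-is.
instruction = "Given a web search query, retrieve relevant passages that answer the query:"
queries = [[instruction, "how much protein should a female eat"]]
documents = ["As a general guideline, adult women need roughly 46 grams of protein per day."]

q_reps = l2v.encode(queries)
d_reps = l2v.encode(documents)

# Cosine similarity between query and document embeddings for ranking.
scores = torch.nn.functional.cosine_similarity(q_reps, d_reps)
print(scores)
```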
## Questions
If you have any questions about the code, feel free to email Parishad ('parishad.behnamghader@URL') and Vaibhav ('vaibhav.adlakha@URL'). | [
"# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders\n\n> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.\n- Repository: URL\n- Paper: URL",
"## Installation",
"## Usage",
"## Questions\nIf you have any question about the code, feel free to email Parishad ('parishad.behnamghader@URL') and Vaibhav ('vaibhav.adlakha@URL')."
] | [
"TAGS\n#peft #safetensors #text-embedding #embeddings #information-retrieval #beir #text-classification #language-model #text-clustering #text-semantic-similarity #text-evaluation #text-reranking #feature-extraction #sentence-similarity #Sentence Similarity #natural_questions #ms_marco #fever #hotpot_qa #mteb #en #arxiv-2404.05961 #license-mit #model-index #region-us \n",
"# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders\n\n> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.\n- Repository: URL\n- Paper: URL",
"## Installation",
"## Usage",
"## Questions\nIf you have any question about the code, feel free to email Parishad ('parishad.behnamghader@URL') and Vaibhav ('vaibhav.adlakha@URL')."
] | [
111,
105,
3,
3,
55
] | [
"TAGS\n#peft #safetensors #text-embedding #embeddings #information-retrieval #beir #text-classification #language-model #text-clustering #text-semantic-similarity #text-evaluation #text-reranking #feature-extraction #sentence-similarity #Sentence Similarity #natural_questions #ms_marco #fever #hotpot_qa #mteb #en #arxiv-2404.05961 #license-mit #model-index #region-us \n# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders\n\n> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.\n- Repository: URL\n- Paper: URL## Installation## Usage## Questions\nIf you have any question about the code, feel free to email Parishad ('parishad.behnamghader@URL') and Vaibhav ('vaibhav.adlakha@URL')."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4564
- F1 Score: 0.7980
- Accuracy: 0.7996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
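
For readers who want to reproduce a run with these settings, the following is a minimal sketch of how such a PEFT fine-tuning job can be wired up with the Hugging Face `Trainer`. Only the optimizer, scheduler, seed, batch-size, and step values mirror the list above; the adapter configuration, dataset field names, and split handling are illustrative assumptions, since the card does not state them.

```python
# Minimal reproduction sketch, assuming the usual transformers + peft workflow.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "mahdibaghbanzadeh/seqsight_16384_512_56M"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Adapter type and target modules are not stated in the card; these are placeholders.
model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))

dataset = load_dataset("mahdibaghbanzadeh/GUE_EMP_H3K36me3")

def tokenize(batch):
    return tokenizer(batch["sequence"], truncation=True)  # column name is an assumption

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="GUE_EMP_H3K36me3-seqsight_16384_512_56M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",  # Adam betas/epsilon match the transformers defaults
    evaluation_strategy="steps",
    eval_steps=200,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"] if "validation" in dataset else dataset["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```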
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5364 | 0.92 | 200 | 0.5215 | 0.7560 | 0.7586 |
| 0.4948 | 1.83 | 400 | 0.5052 | 0.7660 | 0.7686 |
| 0.4846 | 2.75 | 600 | 0.4944 | 0.7759 | 0.7778 |
| 0.4841 | 3.67 | 800 | 0.4906 | 0.7740 | 0.7761 |
| 0.4697 | 4.59 | 1000 | 0.4808 | 0.7865 | 0.7876 |
| 0.4656 | 5.5 | 1200 | 0.4815 | 0.7797 | 0.7818 |
| 0.4649 | 6.42 | 1400 | 0.4822 | 0.7845 | 0.7861 |
| 0.4598 | 7.34 | 1600 | 0.4864 | 0.7849 | 0.7876 |
| 0.4549 | 8.26 | 1800 | 0.4851 | 0.7814 | 0.7833 |
| 0.4612 | 9.17 | 2000 | 0.4770 | 0.7853 | 0.7876 |
| 0.4564 | 10.09 | 2200 | 0.4957 | 0.7749 | 0.7792 |
| 0.4526 | 11.01 | 2400 | 0.4733 | 0.7906 | 0.7927 |
| 0.4536 | 11.93 | 2600 | 0.4669 | 0.7903 | 0.7916 |
| 0.4496 | 12.84 | 2800 | 0.4735 | 0.7900 | 0.7921 |
| 0.4462 | 13.76 | 3000 | 0.4792 | 0.7915 | 0.7942 |
| 0.445 | 14.68 | 3200 | 0.4707 | 0.7925 | 0.7939 |
| 0.4462 | 15.6 | 3400 | 0.4699 | 0.7889 | 0.7910 |
| 0.4433 | 16.51 | 3600 | 0.4768 | 0.7922 | 0.7942 |
| 0.4438 | 17.43 | 3800 | 0.4649 | 0.7917 | 0.7930 |
| 0.4401 | 18.35 | 4000 | 0.4676 | 0.7912 | 0.7930 |
| 0.4412 | 19.27 | 4200 | 0.4757 | 0.7896 | 0.7913 |
| 0.4397 | 20.18 | 4400 | 0.4778 | 0.7887 | 0.7910 |
| 0.435 | 21.1 | 4600 | 0.4743 | 0.7910 | 0.7927 |
| 0.4381 | 22.02 | 4800 | 0.4741 | 0.7896 | 0.7913 |
| 0.4369 | 22.94 | 5000 | 0.4660 | 0.7913 | 0.7933 |
| 0.4355 | 23.85 | 5200 | 0.4656 | 0.7911 | 0.7927 |
| 0.4326 | 24.77 | 5400 | 0.4789 | 0.7857 | 0.7884 |
| 0.4347 | 25.69 | 5600 | 0.4708 | 0.7890 | 0.7910 |
| 0.4317 | 26.61 | 5800 | 0.4671 | 0.7909 | 0.7924 |
| 0.4329 | 27.52 | 6000 | 0.4792 | 0.7873 | 0.7899 |
| 0.4342 | 28.44 | 6200 | 0.4713 | 0.7896 | 0.7913 |
| 0.429 | 29.36 | 6400 | 0.4712 | 0.7887 | 0.7910 |
| 0.4286 | 30.28 | 6600 | 0.4734 | 0.7878 | 0.7904 |
| 0.4308 | 31.19 | 6800 | 0.4683 | 0.7929 | 0.7942 |
| 0.4317 | 32.11 | 7000 | 0.4692 | 0.7884 | 0.7904 |
| 0.4273 | 33.03 | 7200 | 0.4705 | 0.7895 | 0.7913 |
| 0.4279 | 33.94 | 7400 | 0.4733 | 0.7875 | 0.7896 |
| 0.4277 | 34.86 | 7600 | 0.4733 | 0.7864 | 0.7887 |
| 0.4274 | 35.78 | 7800 | 0.4687 | 0.7930 | 0.7944 |
| 0.4291 | 36.7 | 8000 | 0.4684 | 0.7884 | 0.7904 |
| 0.4271 | 37.61 | 8200 | 0.4729 | 0.7865 | 0.7893 |
| 0.4268 | 38.53 | 8400 | 0.4691 | 0.7895 | 0.7913 |
| 0.4245 | 39.45 | 8600 | 0.4715 | 0.7859 | 0.7881 |
| 0.4226 | 40.37 | 8800 | 0.4767 | 0.7884 | 0.7907 |
| 0.4282 | 41.28 | 9000 | 0.4701 | 0.7897 | 0.7919 |
| 0.4216 | 42.2 | 9200 | 0.4703 | 0.7880 | 0.7899 |
| 0.4218 | 43.12 | 9400 | 0.4721 | 0.7883 | 0.7901 |
| 0.426 | 44.04 | 9600 | 0.4703 | 0.7880 | 0.7901 |
| 0.4224 | 44.95 | 9800 | 0.4726 | 0.7896 | 0.7919 |
| 0.4236 | 45.87 | 10000 | 0.4713 | 0.7889 | 0.7910 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:36:00+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_16384\_512\_56M-L1\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4564
* F1 Score: 0.7980
* Accuracy: 0.7996
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4559
- F1 Score: 0.8045
- Accuracy: 0.8065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
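
The training setup is identical to the L1 variant above, so rather than repeating the configuration sketch, the snippet below shows how the resulting adapter could be loaded for inference once training has finished. The adapter repository id is taken from this card's name; the example sequence, the input column, and the label ordering are illustrative assumptions.

```python
# Minimal inference sketch, assuming the trained adapter was pushed under this card's name.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "mahdibaghbanzadeh/seqsight_16384_512_56M"
adapter = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_56M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)
model = PeftModel.from_pretrained(model, adapter)  # attach the fine-tuned adapter weights
model.eval()

sequence = "ACGT" * 32  # placeholder DNA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # which index corresponds to the H3K36me3-positive class is an assumption
```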
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5255 | 0.92 | 200 | 0.5330 | 0.7521 | 0.7566 |
| 0.4811 | 1.83 | 400 | 0.4920 | 0.7744 | 0.7772 |
| 0.47 | 2.75 | 600 | 0.4790 | 0.7822 | 0.7841 |
| 0.4688 | 3.67 | 800 | 0.4757 | 0.7875 | 0.7893 |
| 0.4537 | 4.59 | 1000 | 0.4691 | 0.7954 | 0.7967 |
| 0.4473 | 5.5 | 1200 | 0.4725 | 0.7914 | 0.7936 |
| 0.4478 | 6.42 | 1400 | 0.4741 | 0.7948 | 0.7959 |
| 0.4406 | 7.34 | 1600 | 0.4777 | 0.7829 | 0.7861 |
| 0.4343 | 8.26 | 1800 | 0.4757 | 0.7884 | 0.7904 |
| 0.4389 | 9.17 | 2000 | 0.4662 | 0.7885 | 0.7910 |
| 0.4343 | 10.09 | 2200 | 0.4969 | 0.7705 | 0.7758 |
| 0.4285 | 11.01 | 2400 | 0.4684 | 0.7901 | 0.7921 |
| 0.4279 | 11.93 | 2600 | 0.4602 | 0.7940 | 0.7947 |
| 0.4246 | 12.84 | 2800 | 0.4694 | 0.7860 | 0.7887 |
| 0.4196 | 13.76 | 3000 | 0.4813 | 0.7828 | 0.7864 |
| 0.4161 | 14.68 | 3200 | 0.4710 | 0.7918 | 0.7939 |
| 0.4141 | 15.6 | 3400 | 0.4650 | 0.7945 | 0.7959 |
| 0.4138 | 16.51 | 3600 | 0.4832 | 0.7901 | 0.7927 |
| 0.4107 | 17.43 | 3800 | 0.4799 | 0.7887 | 0.7916 |
| 0.4075 | 18.35 | 4000 | 0.4638 | 0.7936 | 0.7953 |
| 0.4062 | 19.27 | 4200 | 0.4874 | 0.7941 | 0.7962 |
| 0.4037 | 20.18 | 4400 | 0.4863 | 0.7916 | 0.7936 |
| 0.3987 | 21.1 | 4600 | 0.4773 | 0.7965 | 0.7976 |
| 0.3985 | 22.02 | 4800 | 0.4745 | 0.7940 | 0.7956 |
| 0.3972 | 22.94 | 5000 | 0.4818 | 0.7888 | 0.7919 |
| 0.3948 | 23.85 | 5200 | 0.4807 | 0.7968 | 0.7987 |
| 0.389 | 24.77 | 5400 | 0.4960 | 0.7899 | 0.7927 |
| 0.391 | 25.69 | 5600 | 0.4787 | 0.7974 | 0.7993 |
| 0.3885 | 26.61 | 5800 | 0.4725 | 0.7962 | 0.7976 |
| 0.3884 | 27.52 | 6000 | 0.4987 | 0.7897 | 0.7921 |
| 0.3868 | 28.44 | 6200 | 0.4780 | 0.7996 | 0.8010 |
| 0.3799 | 29.36 | 6400 | 0.4758 | 0.7952 | 0.7967 |
| 0.3805 | 30.28 | 6600 | 0.4910 | 0.7925 | 0.7950 |
| 0.3827 | 31.19 | 6800 | 0.4769 | 0.7972 | 0.7985 |
| 0.381 | 32.11 | 7000 | 0.4820 | 0.7954 | 0.7973 |
| 0.3746 | 33.03 | 7200 | 0.4932 | 0.7949 | 0.7964 |
| 0.3771 | 33.94 | 7400 | 0.4834 | 0.7944 | 0.7964 |
| 0.3739 | 34.86 | 7600 | 0.4916 | 0.7901 | 0.7924 |
| 0.3735 | 35.78 | 7800 | 0.4882 | 0.7996 | 0.8007 |
| 0.3757 | 36.7 | 8000 | 0.4846 | 0.7970 | 0.7987 |
| 0.3713 | 37.61 | 8200 | 0.4923 | 0.7930 | 0.7953 |
| 0.3712 | 38.53 | 8400 | 0.4950 | 0.7972 | 0.7990 |
| 0.3691 | 39.45 | 8600 | 0.4936 | 0.7936 | 0.7959 |
| 0.3675 | 40.37 | 8800 | 0.5022 | 0.7935 | 0.7956 |
| 0.37 | 41.28 | 9000 | 0.4927 | 0.7945 | 0.7964 |
| 0.3662 | 42.2 | 9200 | 0.4894 | 0.7957 | 0.7976 |
| 0.3663 | 43.12 | 9400 | 0.4940 | 0.7948 | 0.7964 |
| 0.3676 | 44.04 | 9600 | 0.4935 | 0.7947 | 0.7967 |
| 0.3665 | 44.95 | 9800 | 0.4951 | 0.7949 | 0.7970 |
| 0.365 | 45.87 | 10000 | 0.4952 | 0.7950 | 0.7970 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:36:20+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_16384\_512\_56M-L8\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4559
* F1 Score: 0.8045
* Accuracy: 0.8065
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null | This is a copy of the zero123-xl model from https://zero123.cs.columbia.edu/; please refer to [their Hugging Face page: https://huggingface.co/cvlab](https://huggingface.co/cvlab) for more information.
| {} | kealiu/zero123-xl | null | [
"region:us"
] | null | 2024-04-30T02:37:23+00:00 | [] | [] | TAGS
#region-us
| This is a copy of the zero123-xl model from URL; please refer to their Hugging Face page (URL) for more information.
| [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA26
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
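
The fine-tuning script itself is not part of this card. As a rough sketch only, the hyperparameters above map onto `transformers` `TrainingArguments` as follows (dataset and model loading omitted):

```python
from transformers import TrainingArguments

# Values taken from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="O0428HMA26",
    learning_rate=3e-4,                  # 0.0003
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,      # 8 * 16 = effective batch size 128
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                           # "Native AMP" mixed precision
)
```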
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3788 | 0.09 | 10 | 0.1706 |
| 0.1647 | 0.18 | 20 | 0.1607 |
| 0.1505 | 0.27 | 30 | 0.1637 |
| 0.1577 | 0.36 | 40 | 0.1513 |
| 0.1517 | 0.45 | 50 | 0.1517 |
| 0.1528 | 0.54 | 60 | 0.1497 |
| 0.1516 | 0.63 | 70 | 0.1478 |
| 0.1492 | 0.73 | 80 | 0.1647 |
| 0.1507 | 0.82 | 90 | 0.1472 |
| 0.1498 | 0.91 | 100 | 0.1525 |
| 0.1516 | 1.0 | 110 | 0.1518 |
| 0.1484 | 1.09 | 120 | 0.1495 |
| 0.1494 | 1.18 | 130 | 0.1516 |
| 0.1487 | 1.27 | 140 | 0.1508 |
| 0.15 | 1.36 | 150 | 0.1485 |
| 0.1454 | 1.45 | 160 | 0.1474 |
| 0.1458 | 1.54 | 170 | 0.1476 |
| 0.1482 | 1.63 | 180 | 0.1462 |
| 0.1472 | 1.72 | 190 | 0.1505 |
| 0.146 | 1.81 | 200 | 0.1486 |
| 0.1495 | 1.9 | 210 | 0.1498 |
| 0.1471 | 1.99 | 220 | 0.1510 |
| 0.1478 | 2.08 | 230 | 0.1477 |
| 0.1413 | 2.18 | 240 | 0.1460 |
| 0.1425 | 2.27 | 250 | 0.1473 |
| 0.1432 | 2.36 | 260 | 0.1473 |
| 0.1408 | 2.45 | 270 | 0.1445 |
| 0.1384 | 2.54 | 280 | 0.1428 |
| 0.1378 | 2.63 | 290 | 0.1420 |
| 0.1396 | 2.72 | 300 | 0.1387 |
| 0.1376 | 2.81 | 310 | 0.1378 |
| 0.1365 | 2.9 | 320 | 0.1367 |
| 0.1368 | 2.99 | 330 | 0.1367 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA26", "results": []}]} | Litzy619/O0428HMA26 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T02:38:51+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA26
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1367
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 80
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0428HMA25
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3732 | 0.09 | 10 | 0.1777 |
| 0.1625 | 0.18 | 20 | 0.1554 |
| 0.1492 | 0.27 | 30 | 0.1655 |
| 0.1576 | 0.36 | 40 | 0.1524 |
| 0.1518 | 0.45 | 50 | 0.1576 |
| 0.1514 | 0.54 | 60 | 0.1505 |
| 0.1536 | 0.63 | 70 | 0.1484 |
| 0.1497 | 0.73 | 80 | 0.1585 |
| 0.1499 | 0.82 | 90 | 0.1483 |
| 0.1498 | 0.91 | 100 | 0.1500 |
| 0.1518 | 1.0 | 110 | 0.1494 |
| 0.1477 | 1.09 | 120 | 0.1481 |
| 0.1458 | 1.18 | 130 | 0.1525 |
| 0.1472 | 1.27 | 140 | 0.1484 |
| 0.1487 | 1.36 | 150 | 0.1500 |
| 0.1448 | 1.45 | 160 | 0.1456 |
| 0.1363 | 1.54 | 170 | 0.1287 |
| 0.0851 | 1.63 | 180 | 0.0912 |
| 0.152 | 1.72 | 190 | 0.1214 |
| 0.1799 | 1.81 | 200 | 0.0633 |
| 0.0692 | 1.9 | 210 | 0.0533 |
| 0.0482 | 1.99 | 220 | 0.0345 |
| 0.0448 | 2.08 | 230 | 0.0370 |
| 0.0304 | 2.18 | 240 | 0.0237 |
| 0.0484 | 2.27 | 250 | 0.0524 |
| 0.0422 | 2.36 | 260 | 0.0289 |
| 0.0264 | 2.45 | 270 | 0.0223 |
| 0.0174 | 2.54 | 280 | 0.0199 |
| 0.0267 | 2.63 | 290 | 0.0188 |
| 0.0237 | 2.72 | 300 | 0.0185 |
| 0.018 | 2.81 | 310 | 0.0179 |
| 0.0219 | 2.9 | 320 | 0.0180 |
| 0.0228 | 2.99 | 330 | 0.0179 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428HMA25", "results": []}]} | Litzy619/O0428HMA25 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T02:38:51+00:00 | [] | [] | TAGS
#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
| O0428HMA25
==========
This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0179
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 80
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
35,
160,
5,
47
] | [
"TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text-generation | transformers | <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
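
As a hedged illustration of the NTK-aware initialization step above (not the exact schedule used for this release; the final RoPE theta values were tuned empirically and are listed in the table below), the RoPE base can be scaled with the context-extension factor roughly like this:

```python
from transformers import AutoConfig

# Sketch only: NTK-aware interpolation raises the RoPE base by s**(d / (d - 2)),
# where s is the context-length scale factor and d is the per-head dimension.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

original_ctx, target_ctx = 8192, 65536                  # first training stage
scale = target_ctx / original_ctx
head_dim = config.hidden_size // config.num_attention_heads
config.rope_theta = config.rope_theta * scale ** (head_dim / (head_dim - 2))
```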
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
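
The augmentation pipeline itself is not published here. Purely as an illustration (the field names and packing strategy below are assumptions, not the actual pipeline), long samples can be built by tokenizing SlimPajama documents and packing them to the target sequence length:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
stream = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

def pack(docs, target_len=65536):
    """Concatenate tokenized documents and emit fixed-length long-context samples."""
    buffer = []
    for doc in docs:
        buffer.extend(tokenizer(doc["text"], add_special_tokens=False)["input_ids"])
        while len(buffer) >= target_len:
            yield buffer[:target_len]
            buffer = buffer[target_len:]

first_long_sample = next(pack(stream))
```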
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 16 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
**Quants**:
- [GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF)
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos | {"language": ["en"], "license": "llama3", "tags": ["meta", "llama-3"], "pipeline_tag": "text-generation"} | blockblockblock/Llama-3-8B-Instruct-Gradient-1048k-bpw4.6-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"conversational",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:39:19+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| [<img src="URL width="200"/>](URL)
Llama-3 8B Gradient Instruct 1048k
==================================
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at contact@URL.
For more info see our End-to-end development service for custom LLMs and AI systems
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
!image/png
Approach:
* meta-llama/Meta-Llama-3-8B-Instruct as the base
* NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
* Progressive training on increasing context lengths, similar to Large World Model [2] (See details below)
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
Data:
For training data, we generate long contexts by augmenting SlimPajama.
Progressive Training Details:
Quants:
* GGUF
* MLX-4bit
The Gradient AI Team
--------------------
URL
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
----------
Drop an email to contact@URL
References
----------
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] URL
---
Base Model
==========
Model Details
-------------
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: URL
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
------------
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
----------
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original 'llama3' codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.
#### Transformers pipeline
#### Transformers AutoModelForCausalLM
### Use with 'llama3'
Please, follow the instructions in the repository
To download Original checkpoints, see the example command below leveraging 'huggingface-cli':
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
---------------------
Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
-------------
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
Benchmarks
----------
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
### Base pretrained models
### Instruction tuned models
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
Safety
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
Refusals
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL
#### Critical risks
CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a twofold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### Cyber Security
We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.
### Child Safety
Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.
Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.
Ethical Considerations and Limitations
--------------------------------------
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard, which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at URL
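To make the recommendation above concrete, the following is a minimal, hypothetical sketch of prompt filtering with Llama Guard. It assumes the separately released `meta-llama/Meta-Llama-Guard-2-8B` checkpoint, its built-in chat template, and a completion that begins with "safe" or "unsafe"; none of these details are specified in this card, so treat the snippet as an illustration rather than a reference implementation.
```python
# Hypothetical sketch: screen a user prompt with Llama Guard before it reaches
# the main Llama 3 chat model. Assumes the meta-llama/Meta-Llama-Guard-2-8B
# checkpoint and that its chat template yields a moderation prompt whose
# completion starts with "safe" or "unsafe".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Meta-Llama-Guard-2-8B"  # assumed checkpoint id

guard_tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard_model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    """Return the guard model's verdict for a list of chat messages."""
    input_ids = guard_tokenizer.apply_chat_template(
        chat, return_tensors="pt"
    ).to(guard_model.device)
    output = guard_model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return guard_tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

user_prompt = "How do I set up a campfire safely?"
verdict = moderate([{"role": "user", "content": user_prompt}])

if verdict.strip().startswith("safe"):
    # Only now would the prompt be forwarded to the Llama 3 chat model.
    print("Prompt allowed:", user_prompt)
else:
    print("Prompt blocked by Llama Guard:", verdict)
```
The same check can be applied to the model's own response before it is shown to the user, covering the output-filtering half of the recommendation.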
instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {URL
}
Contributors
------------
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
| [
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.",
"#### Transformers pipeline",
"#### Transformers AutoModelForCausalLM",
"### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.",
"### Base pretrained models",
"### Instruction tuned models",
"### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.",
"#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.",
"#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL",
"#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).",
"### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.",
"### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.",
"### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] | [
52,
42,
6,
13,
429,
8,
6,
270,
280,
72,
115,
118,
126,
2136
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #meta #llama-3 #conversational #en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Use with transformers\n\n\nYou can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the 'generate()' function. Let's see examples of both.#### Transformers pipeline#### Transformers AutoModelForCausalLM### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโs sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.### Base pretrained models### Instruction tuned models### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. 
We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Metaโs cybersecurity safety eval suite, measuring Llama 3โs propensity to suggest insecure code when used as a coding assistant, and Llama 3โs propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. 
On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos"
] |
text2text-generation | transformers | test | {} | shrms/chart_korea | null | [
"transformers",
"pytorch",
"pix2struct",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:39:59+00:00 | [] | [] | TAGS
#transformers #pytorch #pix2struct #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| test | [] | [
"TAGS\n#transformers #pytorch #pix2struct #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
35
] | [
"TAGS\n#transformers #pytorch #pix2struct #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | peft |
Used MonsterAPI for Finetuning
# Model Card for eswardivi/llamathon_v1
Model is Finetuned on microsoft/orca-math-word-problems-200k using MonsterAPI No finetuning
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
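No snippet was provided, so below is a minimal, hypothetical sketch of how a PEFT adapter is typically loaded on top of the base model named in the metadata. It assumes this repository contains a standard LoRA adapter compatible with `peft` and that you have access to the gated `meta-llama/Meta-Llama-3-8B-Instruct` weights; adjust as needed for your setup.
```python
# Hypothetical usage sketch: load the eswardivi/llamathon_v1 adapter on top of
# Meta-Llama-3-8B-Instruct. Assumes a standard PEFT/LoRA adapter layout in the
# repo and access to the gated base checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "eswardivi/llamathon_v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Math word problem in the style of the orca-math training data.
messages = [
    {"role": "user", "content": "A pencil costs $0.50 and an eraser costs $0.25. "
                                "How much do 3 pencils and 2 erasers cost?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
If the adapter was exported in the usual PEFT format, `AutoPeftModelForCausalLM.from_pretrained("eswardivi/llamathon_v1")` may also work as a one-step alternative.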
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
| {"license": "apache-2.0", "library_name": "peft", "datasets": ["microsoft/orca-math-word-problems-200k"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"} | eswardivi/llamathon_v1 | null | [
"peft",
"safetensors",
"dataset:microsoft/orca-math-word-problems-200k",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T02:41:48+00:00 | [] | [] | TAGS
#peft #safetensors #dataset-microsoft/orca-math-word-problems-200k #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-apache-2.0 #region-us
|
Used MonsterAPI for Finetuning
# Model Card for eswardivi/llamathon_v1
Model is Finetuned on microsoft/orca-math-word-problems-200k using MonsterAPI No finetuning
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
| [
"# Model Card for eswardivi/llamathon_v1\n\nModel is Finetuned on microsoft/orca-math-word-problems-200k using MonsterAPI No finetuning",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data"
] | [
"TAGS\n#peft #safetensors #dataset-microsoft/orca-math-word-problems-200k #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-apache-2.0 #region-us \n",
"# Model Card for eswardivi/llamathon_v1\n\nModel is Finetuned on microsoft/orca-math-word-problems-200k using MonsterAPI No finetuning",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data"
] | [
59,
43,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5
] | [
"TAGS\n#peft #safetensors #dataset-microsoft/orca-math-word-problems-200k #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-apache-2.0 #region-us \n# Model Card for eswardivi/llamathon_v1\n\nModel is Finetuned on microsoft/orca-math-word-problems-200k using MonsterAPI No finetuning# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data"
] |
null | transformers |
# Uploaded model
- **Developed by:** MilaNguyen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
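No usage snippet is included in this card; the sketch below shows one plausible way to load the checkpoint with Transformers. It assumes the pushed weights load directly via `AutoModelForCausalLM` (the base model is a bitsandbytes 4-bit variant, so `bitsandbytes` is likely required), and the prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MilaNguyen/sft_summary_1"  # this repository

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The base model is unsloth/mistral-7b-bnb-4bit, so bitsandbytes may be needed
# to materialize the 4-bit quantized weights.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Summarize: The quick brown fox jumps over the lazy dog."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```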
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | MilaNguyen/sft_summary_1 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:42:26+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: MilaNguyen
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: MilaNguyen\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: MilaNguyen\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
62,
78
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: MilaNguyen\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_16384_512_56M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5081
- F1 Score: 0.7989
- Accuracy: 0.8007
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
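Expressed with the Hugging Face `TrainingArguments` API, the configuration above corresponds roughly to the sketch below; only the options listed in the card are set, `output_dir` is a placeholder, and everything else is left at the Trainer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K36me3-seqsight_16384_512_56M-L32_f",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```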
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5189 | 0.92 | 200 | 0.5160 | 0.7631 | 0.7663 |
| 0.4705 | 1.83 | 400 | 0.4825 | 0.7808 | 0.7830 |
| 0.4597 | 2.75 | 600 | 0.4726 | 0.7854 | 0.7870 |
| 0.4576 | 3.67 | 800 | 0.4671 | 0.7873 | 0.7887 |
| 0.4392 | 4.59 | 1000 | 0.4655 | 0.7908 | 0.7927 |
| 0.4323 | 5.5 | 1200 | 0.4652 | 0.7897 | 0.7919 |
| 0.4306 | 6.42 | 1400 | 0.4715 | 0.7922 | 0.7936 |
| 0.4219 | 7.34 | 1600 | 0.4993 | 0.7756 | 0.7804 |
| 0.4112 | 8.26 | 1800 | 0.4653 | 0.7934 | 0.7950 |
| 0.414 | 9.17 | 2000 | 0.4644 | 0.7888 | 0.7913 |
| 0.4047 | 10.09 | 2200 | 0.4850 | 0.7863 | 0.7899 |
| 0.3971 | 11.01 | 2400 | 0.4722 | 0.7904 | 0.7919 |
| 0.3902 | 11.93 | 2600 | 0.4661 | 0.7965 | 0.7970 |
| 0.3828 | 12.84 | 2800 | 0.4784 | 0.7893 | 0.7919 |
| 0.3766 | 13.76 | 3000 | 0.5001 | 0.7854 | 0.7887 |
| 0.3686 | 14.68 | 3200 | 0.5093 | 0.7906 | 0.7933 |
| 0.3576 | 15.6 | 3400 | 0.5030 | 0.7949 | 0.7970 |
| 0.3589 | 16.51 | 3600 | 0.5288 | 0.7869 | 0.7907 |
| 0.3511 | 17.43 | 3800 | 0.5205 | 0.7884 | 0.7916 |
| 0.3449 | 18.35 | 4000 | 0.4984 | 0.7894 | 0.7904 |
| 0.335 | 19.27 | 4200 | 0.5494 | 0.7889 | 0.7921 |
| 0.3309 | 20.18 | 4400 | 0.5330 | 0.8007 | 0.8019 |
| 0.324 | 21.1 | 4600 | 0.5325 | 0.7927 | 0.7933 |
| 0.3162 | 22.02 | 4800 | 0.5123 | 0.7969 | 0.7976 |
| 0.3118 | 22.94 | 5000 | 0.5269 | 0.7857 | 0.7876 |
| 0.3057 | 23.85 | 5200 | 0.5393 | 0.7936 | 0.7956 |
| 0.2982 | 24.77 | 5400 | 0.5480 | 0.7946 | 0.7959 |
| 0.2969 | 25.69 | 5600 | 0.5749 | 0.7926 | 0.7939 |
| 0.2901 | 26.61 | 5800 | 0.5522 | 0.7880 | 0.7896 |
| 0.288 | 27.52 | 6000 | 0.6007 | 0.7845 | 0.7873 |
| 0.284 | 28.44 | 6200 | 0.5484 | 0.7868 | 0.7884 |
| 0.277 | 29.36 | 6400 | 0.5689 | 0.7852 | 0.7870 |
| 0.2698 | 30.28 | 6600 | 0.6168 | 0.7842 | 0.7873 |
| 0.2756 | 31.19 | 6800 | 0.5753 | 0.7870 | 0.7878 |
| 0.2662 | 32.11 | 7000 | 0.6208 | 0.7857 | 0.7876 |
| 0.2629 | 33.03 | 7200 | 0.5987 | 0.7879 | 0.7896 |
| 0.2587 | 33.94 | 7400 | 0.6090 | 0.7861 | 0.7878 |
| 0.2521 | 34.86 | 7600 | 0.6288 | 0.7790 | 0.7810 |
| 0.2526 | 35.78 | 7800 | 0.6044 | 0.7897 | 0.7907 |
| 0.2498 | 36.7 | 8000 | 0.6139 | 0.7806 | 0.7824 |
| 0.2459 | 37.61 | 8200 | 0.6365 | 0.7844 | 0.7864 |
| 0.2421 | 38.53 | 8400 | 0.6772 | 0.7825 | 0.7853 |
| 0.2462 | 39.45 | 8600 | 0.6503 | 0.7889 | 0.7907 |
| 0.2373 | 40.37 | 8800 | 0.6569 | 0.7867 | 0.7887 |
| 0.239 | 41.28 | 9000 | 0.6492 | 0.7790 | 0.7807 |
| 0.2371 | 42.2 | 9200 | 0.6445 | 0.7821 | 0.7838 |
| 0.2328 | 43.12 | 9400 | 0.6469 | 0.7839 | 0.7856 |
| 0.2345 | 44.04 | 9600 | 0.6582 | 0.7807 | 0.7827 |
| 0.2314 | 44.95 | 9800 | 0.6627 | 0.7807 | 0.7830 |
| 0.2302 | 45.87 | 10000 | 0.6613 | 0.7827 | 0.7847 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_16384_512_56M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_16384_512_56M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:43:11+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_16384\_512\_56M-L32\_f
===================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5081
* F1 Score: 0.7989
* Accuracy: 0.8007
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | armaniii/llama-3-8b-argument-detection | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:43:27+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
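In the meantime, a minimal text-generation sketch follows. The tags indicate a phi-architecture causal LM that ships custom modeling code, so `trust_remote_code=True` is assumed to be required; the prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "kyounghyun/eeve-levware-k-240430"  # this repository

# The repository ships custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True, device_map="auto")

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```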
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kyounghyun/eeve-levware-k-240430 | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:43:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
50,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_16384_512_56M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
- F1 Score: 0.7250
- Accuracy: 0.7259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
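The F1 score and accuracy columns in the training results below can be produced with a `compute_metrics` callback along the lines of the sketch here; the exact F1 averaging used for this card is not stated, so weighted averaging is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair handed over by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        # Weighted averaging is an assumption; the card does not say which average was used.
        "f1": f1_score(labels, predictions, average="weighted"),
        "accuracy": accuracy_score(labels, predictions),
    }
```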
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6253 | 3.92 | 200 | 0.5778 | 0.6926 | 0.6926 |
| 0.5902 | 7.84 | 400 | 0.5768 | 0.6921 | 0.6938 |
| 0.5723 | 11.76 | 600 | 0.5556 | 0.7210 | 0.7210 |
| 0.5572 | 15.69 | 800 | 0.5455 | 0.7308 | 0.7321 |
| 0.5417 | 19.61 | 1000 | 0.5452 | 0.7346 | 0.7346 |
| 0.5289 | 23.53 | 1200 | 0.5381 | 0.7325 | 0.7346 |
| 0.5132 | 27.45 | 1400 | 0.5405 | 0.7281 | 0.7284 |
| 0.5028 | 31.37 | 1600 | 0.5324 | 0.7294 | 0.7296 |
| 0.4924 | 35.29 | 1800 | 0.5291 | 0.7321 | 0.7321 |
| 0.4815 | 39.22 | 2000 | 0.5237 | 0.7284 | 0.7284 |
| 0.472 | 43.14 | 2200 | 0.5350 | 0.7317 | 0.7321 |
| 0.4673 | 47.06 | 2400 | 0.5240 | 0.7309 | 0.7309 |
| 0.4596 | 50.98 | 2600 | 0.5353 | 0.7293 | 0.7296 |
| 0.4557 | 54.9 | 2800 | 0.5245 | 0.7343 | 0.7346 |
| 0.4465 | 58.82 | 3000 | 0.5219 | 0.7331 | 0.7333 |
| 0.4476 | 62.75 | 3200 | 0.5298 | 0.7334 | 0.7333 |
| 0.4376 | 66.67 | 3400 | 0.5273 | 0.7370 | 0.7370 |
| 0.4305 | 70.59 | 3600 | 0.5242 | 0.7358 | 0.7358 |
| 0.4273 | 74.51 | 3800 | 0.5299 | 0.7383 | 0.7383 |
| 0.4202 | 78.43 | 4000 | 0.5254 | 0.7418 | 0.7420 |
| 0.421 | 82.35 | 4200 | 0.5231 | 0.7522 | 0.7531 |
| 0.4095 | 86.27 | 4400 | 0.5391 | 0.7395 | 0.7395 |
| 0.4062 | 90.2 | 4600 | 0.5302 | 0.7428 | 0.7432 |
| 0.4021 | 94.12 | 4800 | 0.5313 | 0.7445 | 0.7444 |
| 0.3992 | 98.04 | 5000 | 0.5226 | 0.7565 | 0.7568 |
| 0.3951 | 101.96 | 5200 | 0.5339 | 0.7494 | 0.7494 |
| 0.3893 | 105.88 | 5400 | 0.5386 | 0.7444 | 0.7444 |
| 0.3842 | 109.8 | 5600 | 0.5358 | 0.7519 | 0.7519 |
| 0.3848 | 113.73 | 5800 | 0.5319 | 0.7519 | 0.7519 |
| 0.3784 | 117.65 | 6000 | 0.5389 | 0.7482 | 0.7481 |
| 0.373 | 121.57 | 6200 | 0.5481 | 0.7481 | 0.7481 |
| 0.3738 | 125.49 | 6400 | 0.5382 | 0.7506 | 0.7506 |
| 0.3641 | 129.41 | 6600 | 0.5452 | 0.7494 | 0.7494 |
| 0.3638 | 133.33 | 6800 | 0.5474 | 0.7556 | 0.7556 |
| 0.3581 | 137.25 | 7000 | 0.5569 | 0.7505 | 0.7506 |
| 0.3558 | 141.18 | 7200 | 0.5497 | 0.7494 | 0.7494 |
| 0.3538 | 145.1 | 7400 | 0.5555 | 0.7482 | 0.7481 |
| 0.3533 | 149.02 | 7600 | 0.5548 | 0.7506 | 0.7506 |
| 0.3481 | 152.94 | 7800 | 0.5495 | 0.7519 | 0.7519 |
| 0.3476 | 156.86 | 8000 | 0.5569 | 0.7482 | 0.7481 |
| 0.3453 | 160.78 | 8200 | 0.5602 | 0.7444 | 0.7444 |
| 0.3439 | 164.71 | 8400 | 0.5622 | 0.7481 | 0.7481 |
| 0.3433 | 168.63 | 8600 | 0.5544 | 0.7482 | 0.7481 |
| 0.3376 | 172.55 | 8800 | 0.5592 | 0.7531 | 0.7531 |
| 0.3405 | 176.47 | 9000 | 0.5619 | 0.7519 | 0.7519 |
| 0.3299 | 180.39 | 9200 | 0.5606 | 0.7544 | 0.7543 |
| 0.3387 | 184.31 | 9400 | 0.5643 | 0.7518 | 0.7519 |
| 0.3341 | 188.24 | 9600 | 0.5666 | 0.7505 | 0.7506 |
| 0.3358 | 192.16 | 9800 | 0.5641 | 0.7518 | 0.7519 |
| 0.3335 | 196.08 | 10000 | 0.5653 | 0.7494 | 0.7494 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_56M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_56M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:44:53+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_mouse\_0-seqsight\_16384\_512\_56M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5589
* F1 Score: 0.7250
* Accuracy: 0.7259
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
sentence-similarity | peft |
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading the base Meta-Llama-3 model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
)
model = model.merge_and_unload() # This can take several minutes on cpu
# Loading unsupervised SimCSE model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + SimCSE (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.6522, 0.1891],
[0.1162, 0.3457]])
"""
```
## Questions
If you have any question about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). | {"language": ["en"], "license": "mit", "library_name": "peft", "tags": ["text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "LLM2Vec-Meta-Llama-3-unsupervised", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "accuracy", "value": 75.70149253731343}, {"type": "ap", "value": 40.824269118508354}, {"type": "f1", "value": 70.55918234479084}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonPolarityClassification", "type": "mteb/amazon_polarity", "config": "default", "split": "test", "revision": "e2d317d38cd51312af73b3d32a06d1a08b442046"}, "metrics": [{"type": "accuracy", "value": 80.6812}, {"type": "ap", "value": 76.63327889516552}, {"type": "f1", "value": 80.5276613226382}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonReviewsClassification (en)", "type": "mteb/amazon_reviews_multi", "config": "en", "split": "test", "revision": "1399c76144fd37290681b995c656ef9b2e06e26d"}, "metrics": [{"type": "accuracy", "value": 40.002}, {"type": "f1", "value": 39.67277678335084}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ArguAna", "type": "arguana", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.173999999999996}, {"type": "map_at_10", "value": 42.548}, {"type": "map_at_100", "value": 43.492999999999995}, {"type": "map_at_1000", "value": 43.5}, {"type": "map_at_3", "value": 37.376}, {"type": "map_at_5", "value": 40.359}, {"type": "mrr_at_1", "value": 27.24}, {"type": "mrr_at_10", "value": 42.945}, {"type": "mrr_at_100", "value": 43.89}, {"type": "mrr_at_1000", "value": 43.897000000000006}, {"type": "mrr_at_3", "value": 37.779}, {"type": "mrr_at_5", "value": 40.755}, {"type": "ndcg_at_1", "value": 26.173999999999996}, {"type": "ndcg_at_10", "value": 51.731}, {"type": "ndcg_at_100", "value": 55.684999999999995}, {"type": "ndcg_at_1000", "value": 55.86}, {"type": "ndcg_at_3", "value": 41.122}, {"type": "ndcg_at_5", "value": 46.491}, {"type": "precision_at_1", "value": 26.173999999999996}, {"type": "precision_at_10", "value": 8.108}, {"type": "precision_at_100", "value": 0.9820000000000001}, {"type": "precision_at_1000", "value": 0.1}, {"type": "precision_at_3", "value": 17.330000000000002}, {"type": "precision_at_5", "value": 13.001}, {"type": "recall_at_1", "value": 26.173999999999996}, {"type": "recall_at_10", "value": 81.081}, {"type": "recall_at_100", "value": 98.222}, {"type": "recall_at_1000", "value": 99.57300000000001}, {"type": "recall_at_3", "value": 51.991}, {"type": "recall_at_5", "value": 65.007}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB ArxivClusteringP2P", "type": "mteb/arxiv-clustering-p2p", "config": "default", "split": "test", "revision": "a122ad7f3f0291bf49cc6f4d32aa80929df69d5d"}, "metrics": [{"type": "v_measure", "value": 49.215974795578546}]}, {"task": {"type": "Clustering"}, 
"dataset": {"name": "MTEB ArxivClusteringS2S", "type": "mteb/arxiv-clustering-s2s", "config": "default", "split": "test", "revision": "f910caf1a6075f7329cdf8c1a6135696f37dbd53"}, "metrics": [{"type": "v_measure", "value": 41.71067780141813}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB AskUbuntuDupQuestions", "type": "mteb/askubuntudupquestions-reranking", "config": "default", "split": "test", "revision": "2000358ca161889fa9c082cb41daa8dcfb161a54"}, "metrics": [{"type": "map", "value": 57.15639347603191}, {"type": "mrr", "value": 71.4509959108297}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "revision": "d3fb88f8f02e40887cd149695127462bbcf29b4a"}, "metrics": [{"type": "cos_sim_spearman", "value": 84.67361609277127}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB Banking77Classification", "type": "mteb/banking77", "config": "default", "split": "test", "revision": "0fd18e25b25c072e09e0d92ab615fda904d66300"}, "metrics": [{"type": "accuracy", "value": 84.76623376623375}, {"type": "f1", "value": 84.70041172334481}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringP2P", "type": "mteb/biorxiv-clustering-p2p", "config": "default", "split": "test", "revision": "65b79d1d13f80053f67aca9498d9402c2d9f1f40"}, "metrics": [{"type": "v_measure", "value": 38.39251163108548}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB BiorxivClusteringS2S", "type": "mteb/biorxiv-clustering-s2s", "config": "default", "split": "test", "revision": "258694dd0231531bc1fd9de6ceb52a0853c6d908"}, "metrics": [{"type": "v_measure", "value": 31.30501371807517}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackAndroidRetrieval", "type": "cqadupstack/android", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.409}, {"type": "map_at_10", "value": 36.925000000000004}, {"type": "map_at_100", "value": 38.651}, {"type": "map_at_1000", "value": 38.798}, {"type": "map_at_3", "value": 33.437}, {"type": "map_at_5", "value": 35.506}, {"type": "mrr_at_1", "value": 33.763}, {"type": "mrr_at_10", "value": 43.442}, {"type": "mrr_at_100", "value": 44.339}, {"type": "mrr_at_1000", "value": 44.391000000000005}, {"type": "mrr_at_3", "value": 40.749}, {"type": "mrr_at_5", "value": 42.408}, {"type": "ndcg_at_1", "value": 33.763}, {"type": "ndcg_at_10", "value": 43.486999999999995}, {"type": "ndcg_at_100", "value": 49.71}, {"type": "ndcg_at_1000", "value": 51.81}, {"type": "ndcg_at_3", "value": 38.586}, {"type": "ndcg_at_5", "value": 41.074}, {"type": "precision_at_1", "value": 33.763}, {"type": "precision_at_10", "value": 8.798}, {"type": "precision_at_100", "value": 1.544}, {"type": "precision_at_1000", "value": 0.21}, {"type": "precision_at_3", "value": 19.361}, {"type": "precision_at_5", "value": 14.335}, {"type": "recall_at_1", "value": 26.409}, {"type": "recall_at_10", "value": 55.352999999999994}, {"type": "recall_at_100", "value": 81.66799999999999}, {"type": "recall_at_1000", "value": 95.376}, {"type": "recall_at_3", "value": 40.304}, {"type": "recall_at_5", "value": 47.782000000000004}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackEnglishRetrieval", "type": "cqadupstack/english", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 26.6}, {"type": "map_at_10", "value": 36.42}, {"type": "map_at_100", "value": 37.628}, {"type": 
"map_at_1000", "value": 37.767}, {"type": "map_at_3", "value": 33.553}, {"type": "map_at_5", "value": 35.118}, {"type": "mrr_at_1", "value": 34.394999999999996}, {"type": "mrr_at_10", "value": 42.586}, {"type": "mrr_at_100", "value": 43.251}, {"type": "mrr_at_1000", "value": 43.303000000000004}, {"type": "mrr_at_3", "value": 40.297}, {"type": "mrr_at_5", "value": 41.638}, {"type": "ndcg_at_1", "value": 34.394999999999996}, {"type": "ndcg_at_10", "value": 42.05}, {"type": "ndcg_at_100", "value": 46.371}, {"type": "ndcg_at_1000", "value": 48.76}, {"type": "ndcg_at_3", "value": 37.936}, {"type": "ndcg_at_5", "value": 39.827}, {"type": "precision_at_1", "value": 34.394999999999996}, {"type": "precision_at_10", "value": 8.268}, {"type": "precision_at_100", "value": 1.355}, {"type": "precision_at_1000", "value": 0.186}, {"type": "precision_at_3", "value": 18.726000000000003}, {"type": "precision_at_5", "value": 13.541}, {"type": "recall_at_1", "value": 26.6}, {"type": "recall_at_10", "value": 51.529}, {"type": "recall_at_100", "value": 70.038}, {"type": "recall_at_1000", "value": 85.67}, {"type": "recall_at_3", "value": 39.448}, {"type": "recall_at_5", "value": 44.6}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGamingRetrieval", "type": "cqadupstack/gaming", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 31.863000000000003}, {"type": "map_at_10", "value": 43.733}, {"type": "map_at_100", "value": 45.005}, {"type": "map_at_1000", "value": 45.074}, {"type": "map_at_3", "value": 40.593}, {"type": "map_at_5", "value": 42.272}, {"type": "mrr_at_1", "value": 37.555}, {"type": "mrr_at_10", "value": 47.532999999999994}, {"type": "mrr_at_100", "value": 48.431999999999995}, {"type": "mrr_at_1000", "value": 48.47}, {"type": "mrr_at_3", "value": 44.901}, {"type": "mrr_at_5", "value": 46.274}, {"type": "ndcg_at_1", "value": 37.555}, {"type": "ndcg_at_10", "value": 49.789}, {"type": "ndcg_at_100", "value": 55.059999999999995}, {"type": "ndcg_at_1000", "value": 56.434}, {"type": "ndcg_at_3", "value": 44.238}, {"type": "ndcg_at_5", "value": 46.698}, {"type": "precision_at_1", "value": 37.555}, {"type": "precision_at_10", "value": 8.257}, {"type": "precision_at_100", "value": 1.189}, {"type": "precision_at_1000", "value": 0.136}, {"type": "precision_at_3", "value": 20.23}, {"type": "precision_at_5", "value": 13.868}, {"type": "recall_at_1", "value": 31.863000000000003}, {"type": "recall_at_10", "value": 64.188}, {"type": "recall_at_100", "value": 87.02600000000001}, {"type": "recall_at_1000", "value": 96.761}, {"type": "recall_at_3", "value": 48.986000000000004}, {"type": "recall_at_5", "value": 55.177}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackGisRetrieval", "type": "cqadupstack/gis", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 15.964}, {"type": "map_at_10", "value": 22.746}, {"type": "map_at_100", "value": 23.704}, {"type": "map_at_1000", "value": 23.82}, {"type": "map_at_3", "value": 20.5}, {"type": "map_at_5", "value": 21.836}, {"type": "mrr_at_1", "value": 17.740000000000002}, {"type": "mrr_at_10", "value": 24.634}, {"type": "mrr_at_100", "value": 25.535999999999998}, {"type": "mrr_at_1000", "value": 25.628}, {"type": "mrr_at_3", "value": 22.429}, {"type": "mrr_at_5", "value": 23.791}, {"type": "ndcg_at_1", "value": 17.740000000000002}, {"type": "ndcg_at_10", "value": 26.838}, {"type": "ndcg_at_100", "value": 31.985000000000003}, {"type": 
"ndcg_at_1000", "value": 35.289}, {"type": "ndcg_at_3", "value": 22.384}, {"type": "ndcg_at_5", "value": 24.726}, {"type": "precision_at_1", "value": 17.740000000000002}, {"type": "precision_at_10", "value": 4.35}, {"type": "precision_at_100", "value": 0.753}, {"type": "precision_at_1000", "value": 0.108}, {"type": "precision_at_3", "value": 9.754999999999999}, {"type": "precision_at_5", "value": 7.164}, {"type": "recall_at_1", "value": 15.964}, {"type": "recall_at_10", "value": 37.705}, {"type": "recall_at_100", "value": 61.94499999999999}, {"type": "recall_at_1000", "value": 87.646}, {"type": "recall_at_3", "value": 25.714}, {"type": "recall_at_5", "value": 31.402}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackMathematicaRetrieval", "type": "cqadupstack/mathematica", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.221}, {"type": "map_at_10", "value": 14.735000000000001}, {"type": "map_at_100", "value": 15.778}, {"type": "map_at_1000", "value": 15.9}, {"type": "map_at_3", "value": 12.791}, {"type": "map_at_5", "value": 13.703999999999999}, {"type": "mrr_at_1", "value": 12.438}, {"type": "mrr_at_10", "value": 18.353}, {"type": "mrr_at_100", "value": 19.285}, {"type": "mrr_at_1000", "value": 19.375}, {"type": "mrr_at_3", "value": 16.439}, {"type": "mrr_at_5", "value": 17.352999999999998}, {"type": "ndcg_at_1", "value": 12.438}, {"type": "ndcg_at_10", "value": 18.703}, {"type": "ndcg_at_100", "value": 24.104999999999997}, {"type": "ndcg_at_1000", "value": 27.366}, {"type": "ndcg_at_3", "value": 15.055}, {"type": "ndcg_at_5", "value": 16.42}, {"type": "precision_at_1", "value": 12.438}, {"type": "precision_at_10", "value": 3.818}, {"type": "precision_at_100", "value": 0.77}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 7.753}, {"type": "precision_at_5", "value": 5.622}, {"type": "recall_at_1", "value": 9.221}, {"type": "recall_at_10", "value": 27.461999999999996}, {"type": "recall_at_100", "value": 51.909000000000006}, {"type": "recall_at_1000", "value": 75.56}, {"type": "recall_at_3", "value": 17.046}, {"type": "recall_at_5", "value": 20.766000000000002}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackPhysicsRetrieval", "type": "cqadupstack/physics", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 22.828}, {"type": "map_at_10", "value": 33.166000000000004}, {"type": "map_at_100", "value": 34.618}, {"type": "map_at_1000", "value": 34.744}, {"type": "map_at_3", "value": 29.737000000000002}, {"type": "map_at_5", "value": 31.541000000000004}, {"type": "mrr_at_1", "value": 29.548000000000002}, {"type": "mrr_at_10", "value": 38.582}, {"type": "mrr_at_100", "value": 39.527}, {"type": "mrr_at_1000", "value": 39.577}, {"type": "mrr_at_3", "value": 35.884}, {"type": "mrr_at_5", "value": 37.413999999999994}, {"type": "ndcg_at_1", "value": 29.548000000000002}, {"type": "ndcg_at_10", "value": 39.397}, {"type": "ndcg_at_100", "value": 45.584}, {"type": "ndcg_at_1000", "value": 47.823}, {"type": "ndcg_at_3", "value": 33.717000000000006}, {"type": "ndcg_at_5", "value": 36.223}, {"type": "precision_at_1", "value": 29.548000000000002}, {"type": "precision_at_10", "value": 7.767}, {"type": "precision_at_100", "value": 1.2959999999999998}, {"type": "precision_at_1000", "value": 0.17099999999999999}, {"type": "precision_at_3", "value": 16.747}, {"type": "precision_at_5", "value": 12.203999999999999}, 
{"type": "recall_at_1", "value": 22.828}, {"type": "recall_at_10", "value": 52.583999999999996}, {"type": "recall_at_100", "value": 79.06400000000001}, {"type": "recall_at_1000", "value": 93.59100000000001}, {"type": "recall_at_3", "value": 36.671}, {"type": "recall_at_5", "value": 43.22}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackProgrammersRetrieval", "type": "cqadupstack/programmers", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 21.366}, {"type": "map_at_10", "value": 30.214000000000002}, {"type": "map_at_100", "value": 31.647}, {"type": "map_at_1000", "value": 31.763}, {"type": "map_at_3", "value": 27.234}, {"type": "map_at_5", "value": 28.801}, {"type": "mrr_at_1", "value": 26.256}, {"type": "mrr_at_10", "value": 35.299}, {"type": "mrr_at_100", "value": 36.284}, {"type": "mrr_at_1000", "value": 36.342}, {"type": "mrr_at_3", "value": 32.572}, {"type": "mrr_at_5", "value": 34.050999999999995}, {"type": "ndcg_at_1", "value": 26.256}, {"type": "ndcg_at_10", "value": 35.899}, {"type": "ndcg_at_100", "value": 41.983}, {"type": "ndcg_at_1000", "value": 44.481}, {"type": "ndcg_at_3", "value": 30.665}, {"type": "ndcg_at_5", "value": 32.879999999999995}, {"type": "precision_at_1", "value": 26.256}, {"type": "precision_at_10", "value": 6.804}, {"type": "precision_at_100", "value": 1.187}, {"type": "precision_at_1000", "value": 0.16}, {"type": "precision_at_3", "value": 14.84}, {"type": "precision_at_5", "value": 10.708}, {"type": "recall_at_1", "value": 21.366}, {"type": "recall_at_10", "value": 47.878}, {"type": "recall_at_100", "value": 73.245}, {"type": "recall_at_1000", "value": 90.623}, {"type": "recall_at_3", "value": 33.341}, {"type": "recall_at_5", "value": 39.198}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackRetrieval", "type": "mteb/cqadupstack", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.477166666666665}, {"type": "map_at_10", "value": 27.431416666666664}, {"type": "map_at_100", "value": 28.656000000000002}, {"type": "map_at_1000", "value": 28.787583333333338}, {"type": "map_at_3", "value": 24.85175}, {"type": "map_at_5", "value": 26.270166666666668}, {"type": "mrr_at_1", "value": 24.06841666666667}, {"type": "mrr_at_10", "value": 31.620000000000005}, {"type": "mrr_at_100", "value": 32.52283333333333}, {"type": "mrr_at_1000", "value": 32.59441666666667}, {"type": "mrr_at_3", "value": 29.328666666666663}, {"type": "mrr_at_5", "value": 30.620416666666667}, {"type": "ndcg_at_1", "value": 24.06841666666667}, {"type": "ndcg_at_10", "value": 32.404583333333335}, {"type": "ndcg_at_100", "value": 37.779500000000006}, {"type": "ndcg_at_1000", "value": 40.511583333333334}, {"type": "ndcg_at_3", "value": 27.994166666666665}, {"type": "ndcg_at_5", "value": 30.021749999999997}, {"type": "precision_at_1", "value": 24.06841666666667}, {"type": "precision_at_10", "value": 6.03725}, {"type": "precision_at_100", "value": 1.0500833333333337}, {"type": "precision_at_1000", "value": 0.14875000000000002}, {"type": "precision_at_3", "value": 13.419583333333335}, {"type": "precision_at_5", "value": 9.700666666666665}, {"type": "recall_at_1", "value": 19.477166666666665}, {"type": "recall_at_10", "value": 42.99441666666667}, {"type": "recall_at_100", "value": 66.787}, {"type": "recall_at_1000", "value": 86.18825000000001}, {"type": "recall_at_3", "value": 30.46366666666667}, {"type": "recall_at_5", "value": 35.83141666666667}]}, {"task": 
{"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackStatsRetrieval", "type": "cqadupstack/stats", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 16.246}, {"type": "map_at_10", "value": 22.127}, {"type": "map_at_100", "value": 23.006}, {"type": "map_at_1000", "value": 23.125}, {"type": "map_at_3", "value": 20.308999999999997}, {"type": "map_at_5", "value": 21.139}, {"type": "mrr_at_1", "value": 19.631999999999998}, {"type": "mrr_at_10", "value": 24.884999999999998}, {"type": "mrr_at_100", "value": 25.704}, {"type": "mrr_at_1000", "value": 25.793}, {"type": "mrr_at_3", "value": 23.083000000000002}, {"type": "mrr_at_5", "value": 23.942}, {"type": "ndcg_at_1", "value": 19.631999999999998}, {"type": "ndcg_at_10", "value": 25.862000000000002}, {"type": "ndcg_at_100", "value": 30.436000000000003}, {"type": "ndcg_at_1000", "value": 33.638}, {"type": "ndcg_at_3", "value": 22.431}, {"type": "ndcg_at_5", "value": 23.677}, {"type": "precision_at_1", "value": 19.631999999999998}, {"type": "precision_at_10", "value": 4.417}, {"type": "precision_at_100", "value": 0.7270000000000001}, {"type": "precision_at_1000", "value": 0.109}, {"type": "precision_at_3", "value": 10.327}, {"type": "precision_at_5", "value": 7.147}, {"type": "recall_at_1", "value": 16.246}, {"type": "recall_at_10", "value": 34.869}, {"type": "recall_at_100", "value": 56.221}, {"type": "recall_at_1000", "value": 80.449}, {"type": "recall_at_3", "value": 24.83}, {"type": "recall_at_5", "value": 28.142}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackTexRetrieval", "type": "cqadupstack/tex", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 9.798}, {"type": "map_at_10", "value": 14.695}, {"type": "map_at_100", "value": 15.590000000000002}, {"type": "map_at_1000", "value": 15.726999999999999}, {"type": "map_at_3", "value": 13.004999999999999}, {"type": "map_at_5", "value": 13.861}, {"type": "mrr_at_1", "value": 12.939}, {"type": "mrr_at_10", "value": 18.218}, {"type": "mrr_at_100", "value": 18.998}, {"type": "mrr_at_1000", "value": 19.093}, {"type": "mrr_at_3", "value": 16.454}, {"type": "mrr_at_5", "value": 17.354}, {"type": "ndcg_at_1", "value": 12.939}, {"type": "ndcg_at_10", "value": 18.278}, {"type": "ndcg_at_100", "value": 22.709}, {"type": "ndcg_at_1000", "value": 26.064}, {"type": "ndcg_at_3", "value": 15.204}, {"type": "ndcg_at_5", "value": 16.416}, {"type": "precision_at_1", "value": 12.939}, {"type": "precision_at_10", "value": 3.768}, {"type": "precision_at_100", "value": 0.724}, {"type": "precision_at_1000", "value": 0.11800000000000001}, {"type": "precision_at_3", "value": 7.707999999999999}, {"type": "precision_at_5", "value": 5.733}, {"type": "recall_at_1", "value": 9.798}, {"type": "recall_at_10", "value": 25.562}, {"type": "recall_at_100", "value": 45.678999999999995}, {"type": "recall_at_1000", "value": 69.963}, {"type": "recall_at_3", "value": 16.705000000000002}, {"type": "recall_at_5", "value": 19.969}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackUnixRetrieval", "type": "cqadupstack/unix", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 19.1}, {"type": "map_at_10", "value": 27.034999999999997}, {"type": "map_at_100", "value": 28.396}, {"type": "map_at_1000", "value": 28.518}, {"type": "map_at_3", "value": 24.363}, {"type": "map_at_5", "value": 25.826999999999998}, {"type": "mrr_at_1", "value": 
23.694000000000003}, {"type": "mrr_at_10", "value": 31.724999999999998}, {"type": "mrr_at_100", "value": 32.743}, {"type": "mrr_at_1000", "value": 32.82}, {"type": "mrr_at_3", "value": 29.275000000000002}, {"type": "mrr_at_5", "value": 30.684}, {"type": "ndcg_at_1", "value": 23.694000000000003}, {"type": "ndcg_at_10", "value": 32.366}, {"type": "ndcg_at_100", "value": 38.241}, {"type": "ndcg_at_1000", "value": 40.973}, {"type": "ndcg_at_3", "value": 27.661}, {"type": "ndcg_at_5", "value": 29.782999999999998}, {"type": "precision_at_1", "value": 23.694000000000003}, {"type": "precision_at_10", "value": 5.951}, {"type": "precision_at_100", "value": 1.0070000000000001}, {"type": "precision_at_1000", "value": 0.135}, {"type": "precision_at_3", "value": 13.34}, {"type": "precision_at_5", "value": 9.533999999999999}, {"type": "recall_at_1", "value": 19.1}, {"type": "recall_at_10", "value": 44.032}, {"type": "recall_at_100", "value": 69.186}, {"type": "recall_at_1000", "value": 88.562}, {"type": "recall_at_3", "value": 30.712}, {"type": "recall_at_5", "value": 36.372}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWebmastersRetrieval", "type": "cqadupstack/webmasters", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 20.671}, {"type": "map_at_10", "value": 28.583}, {"type": "map_at_100", "value": 30.098999999999997}, {"type": "map_at_1000", "value": 30.364}, {"type": "map_at_3", "value": 25.825}, {"type": "map_at_5", "value": 27.500999999999998}, {"type": "mrr_at_1", "value": 25.889}, {"type": "mrr_at_10", "value": 33.617999999999995}, {"type": "mrr_at_100", "value": 34.687}, {"type": "mrr_at_1000", "value": 34.774}, {"type": "mrr_at_3", "value": 31.191999999999997}, {"type": "mrr_at_5", "value": 32.675}, {"type": "ndcg_at_1", "value": 25.889}, {"type": "ndcg_at_10", "value": 34.056999999999995}, {"type": "ndcg_at_100", "value": 40.142}, {"type": "ndcg_at_1000", "value": 43.614000000000004}, {"type": "ndcg_at_3", "value": 29.688}, {"type": "ndcg_at_5", "value": 32.057}, {"type": "precision_at_1", "value": 25.889}, {"type": "precision_at_10", "value": 6.7}, {"type": "precision_at_100", "value": 1.417}, {"type": "precision_at_1000", "value": 0.241}, {"type": "precision_at_3", "value": 14.360999999999999}, {"type": "precision_at_5", "value": 10.711}, {"type": "recall_at_1", "value": 20.671}, {"type": "recall_at_10", "value": 43.97}, {"type": "recall_at_100", "value": 71.83699999999999}, {"type": "recall_at_1000", "value": 94.42399999999999}, {"type": "recall_at_3", "value": 31.0}, {"type": "recall_at_5", "value": 37.489}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB CQADupstackWordpressRetrieval", "type": "cqadupstack/wordpress", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 13.66}, {"type": "map_at_10", "value": 18.798000000000002}, {"type": "map_at_100", "value": 19.75}, {"type": "map_at_1000", "value": 19.851}, {"type": "map_at_3", "value": 16.874}, {"type": "map_at_5", "value": 18.136}, {"type": "mrr_at_1", "value": 14.972}, {"type": "mrr_at_10", "value": 20.565}, {"type": "mrr_at_100", "value": 21.488}, {"type": "mrr_at_1000", "value": 21.567}, {"type": "mrr_at_3", "value": 18.669}, {"type": "mrr_at_5", "value": 19.861}, {"type": "ndcg_at_1", "value": 14.972}, {"type": "ndcg_at_10", "value": 22.128999999999998}, {"type": "ndcg_at_100", "value": 27.028000000000002}, {"type": "ndcg_at_1000", "value": 29.887000000000004}, {"type": "ndcg_at_3", "value": 
18.365000000000002}, {"type": "ndcg_at_5", "value": 20.48}, {"type": "precision_at_1", "value": 14.972}, {"type": "precision_at_10", "value": 3.549}, {"type": "precision_at_100", "value": 0.632}, {"type": "precision_at_1000", "value": 0.093}, {"type": "precision_at_3", "value": 7.887}, {"type": "precision_at_5", "value": 5.840999999999999}, {"type": "recall_at_1", "value": 13.66}, {"type": "recall_at_10", "value": 30.801000000000002}, {"type": "recall_at_100", "value": 53.626}, {"type": "recall_at_1000", "value": 75.634}, {"type": "recall_at_3", "value": 20.807000000000002}, {"type": "recall_at_5", "value": 25.86}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB ClimateFEVER", "type": "climate-fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 8.622}, {"type": "map_at_10", "value": 16.042}, {"type": "map_at_100", "value": 18.023}, {"type": "map_at_1000", "value": 18.228}, {"type": "map_at_3", "value": 12.995999999999999}, {"type": "map_at_5", "value": 14.424000000000001}, {"type": "mrr_at_1", "value": 18.892999999999997}, {"type": "mrr_at_10", "value": 30.575000000000003}, {"type": "mrr_at_100", "value": 31.814999999999998}, {"type": "mrr_at_1000", "value": 31.856}, {"type": "mrr_at_3", "value": 26.851000000000003}, {"type": "mrr_at_5", "value": 29.021}, {"type": "ndcg_at_1", "value": 18.892999999999997}, {"type": "ndcg_at_10", "value": 23.575}, {"type": "ndcg_at_100", "value": 31.713}, {"type": "ndcg_at_1000", "value": 35.465}, {"type": "ndcg_at_3", "value": 18.167}, {"type": "ndcg_at_5", "value": 20.071}, {"type": "precision_at_1", "value": 18.892999999999997}, {"type": "precision_at_10", "value": 7.883}, {"type": "precision_at_100", "value": 1.652}, {"type": "precision_at_1000", "value": 0.23500000000000001}, {"type": "precision_at_3", "value": 13.898}, {"type": "precision_at_5", "value": 11.14}, {"type": "recall_at_1", "value": 8.622}, {"type": "recall_at_10", "value": 30.044999999999998}, {"type": "recall_at_100", "value": 58.072}, {"type": "recall_at_1000", "value": 79.226}, {"type": "recall_at_3", "value": 17.21}, {"type": "recall_at_5", "value": 22.249}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB DBPedia", "type": "dbpedia-entity", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 4.845}, {"type": "map_at_10", "value": 12.352}, {"type": "map_at_100", "value": 17.423}, {"type": "map_at_1000", "value": 18.529}, {"type": "map_at_3", "value": 8.505}, {"type": "map_at_5", "value": 10.213}, {"type": "mrr_at_1", "value": 41.75}, {"type": "mrr_at_10", "value": 54.6}, {"type": "mrr_at_100", "value": 55.345}, {"type": "mrr_at_1000", "value": 55.374}, {"type": "mrr_at_3", "value": 52.37500000000001}, {"type": "mrr_at_5", "value": 53.87499999999999}, {"type": "ndcg_at_1", "value": 31.25}, {"type": "ndcg_at_10", "value": 26.779999999999998}, {"type": "ndcg_at_100", "value": 31.929000000000002}, {"type": "ndcg_at_1000", "value": 39.290000000000006}, {"type": "ndcg_at_3", "value": 28.746}, {"type": "ndcg_at_5", "value": 27.334999999999997}, {"type": "precision_at_1", "value": 41.75}, {"type": "precision_at_10", "value": 22.55}, {"type": "precision_at_100", "value": 7.242}, {"type": "precision_at_1000", "value": 1.439}, {"type": "precision_at_3", "value": 33.833}, {"type": "precision_at_5", "value": 28.65}, {"type": "recall_at_1", "value": 4.845}, {"type": "recall_at_10", "value": 18.664}, {"type": "recall_at_100", "value": 41.085}, {"type": "recall_at_1000", 
"value": 65.242}, {"type": "recall_at_3", "value": 10.572}, {"type": "recall_at_5", "value": 13.961000000000002}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB EmotionClassification", "type": "mteb/emotion", "config": "default", "split": "test", "revision": "4f58c6b202a23cf9a4da393831edf4f9183cad37"}, "metrics": [{"type": "accuracy", "value": 47.08}, {"type": "f1", "value": 42.843345856303756}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FEVER", "type": "fever", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 33.743}, {"type": "map_at_10", "value": 46.521}, {"type": "map_at_100", "value": 47.235}, {"type": "map_at_1000", "value": 47.272}, {"type": "map_at_3", "value": 43.252}, {"type": "map_at_5", "value": 45.267}, {"type": "mrr_at_1", "value": 36.484}, {"type": "mrr_at_10", "value": 49.406}, {"type": "mrr_at_100", "value": 50.03300000000001}, {"type": "mrr_at_1000", "value": 50.058}, {"type": "mrr_at_3", "value": 46.195}, {"type": "mrr_at_5", "value": 48.193999999999996}, {"type": "ndcg_at_1", "value": 36.484}, {"type": "ndcg_at_10", "value": 53.42}, {"type": "ndcg_at_100", "value": 56.69499999999999}, {"type": "ndcg_at_1000", "value": 57.623999999999995}, {"type": "ndcg_at_3", "value": 47.010999999999996}, {"type": "ndcg_at_5", "value": 50.524}, {"type": "precision_at_1", "value": 36.484}, {"type": "precision_at_10", "value": 7.925}, {"type": "precision_at_100", "value": 0.975}, {"type": "precision_at_1000", "value": 0.107}, {"type": "precision_at_3", "value": 19.967}, {"type": "precision_at_5", "value": 13.87}, {"type": "recall_at_1", "value": 33.743}, {"type": "recall_at_10", "value": 71.988}, {"type": "recall_at_100", "value": 86.60799999999999}, {"type": "recall_at_1000", "value": 93.54}, {"type": "recall_at_3", "value": 54.855}, {"type": "recall_at_5", "value": 63.341}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB FiQA2018", "type": "fiqa", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 13.003}, {"type": "map_at_10", "value": 21.766}, {"type": "map_at_100", "value": 23.618}, {"type": "map_at_1000", "value": 23.832}, {"type": "map_at_3", "value": 18.282999999999998}, {"type": "map_at_5", "value": 20.267}, {"type": "mrr_at_1", "value": 26.851999999999997}, {"type": "mrr_at_10", "value": 34.658}, {"type": "mrr_at_100", "value": 35.729}, {"type": "mrr_at_1000", "value": 35.785}, {"type": "mrr_at_3", "value": 31.686999999999998}, {"type": "mrr_at_5", "value": 33.315}, {"type": "ndcg_at_1", "value": 26.851999999999997}, {"type": "ndcg_at_10", "value": 28.563}, {"type": "ndcg_at_100", "value": 36.374}, {"type": "ndcg_at_1000", "value": 40.306999999999995}, {"type": "ndcg_at_3", "value": 24.224}, {"type": "ndcg_at_5", "value": 25.939}, {"type": "precision_at_1", "value": 26.851999999999997}, {"type": "precision_at_10", "value": 8.193999999999999}, {"type": "precision_at_100", "value": 1.616}, {"type": "precision_at_1000", "value": 0.232}, {"type": "precision_at_3", "value": 16.255}, {"type": "precision_at_5", "value": 12.469}, {"type": "recall_at_1", "value": 13.003}, {"type": "recall_at_10", "value": 35.689}, {"type": "recall_at_100", "value": 65.762}, {"type": "recall_at_1000", "value": 89.546}, {"type": "recall_at_3", "value": 21.820999999999998}, {"type": "recall_at_5", "value": 28.097}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB HotpotQA", "type": "hotpotqa", "config": "default", "split": "test", "revision": 
"None"}, "metrics": [{"type": "map_at_1", "value": 29.541}, {"type": "map_at_10", "value": 43.088}, {"type": "map_at_100", "value": 44.252}, {"type": "map_at_1000", "value": 44.345}, {"type": "map_at_3", "value": 39.79}, {"type": "map_at_5", "value": 41.687000000000005}, {"type": "mrr_at_1", "value": 59.082}, {"type": "mrr_at_10", "value": 67.27300000000001}, {"type": "mrr_at_100", "value": 67.708}, {"type": "mrr_at_1000", "value": 67.731}, {"type": "mrr_at_3", "value": 65.526}, {"type": "mrr_at_5", "value": 66.589}, {"type": "ndcg_at_1", "value": 59.082}, {"type": "ndcg_at_10", "value": 52.372}, {"type": "ndcg_at_100", "value": 56.725}, {"type": "ndcg_at_1000", "value": 58.665}, {"type": "ndcg_at_3", "value": 47.129}, {"type": "ndcg_at_5", "value": 49.808}, {"type": "precision_at_1", "value": 59.082}, {"type": "precision_at_10", "value": 11.275}, {"type": "precision_at_100", "value": 1.469}, {"type": "precision_at_1000", "value": 0.173}, {"type": "precision_at_3", "value": 29.773}, {"type": "precision_at_5", "value": 19.980999999999998}, {"type": "recall_at_1", "value": 29.541}, {"type": "recall_at_10", "value": 56.374}, {"type": "recall_at_100", "value": 73.42999999999999}, {"type": "recall_at_1000", "value": 86.28}, {"type": "recall_at_3", "value": 44.659}, {"type": "recall_at_5", "value": 49.952999999999996}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ImdbClassification", "type": "mteb/imdb", "config": "default", "split": "test", "revision": "3d86128a09e091d6018b6d26cad27f2739fc2db7"}, "metrics": [{"type": "accuracy", "value": 75.1904}, {"type": "ap", "value": 69.80555086826531}, {"type": "f1", "value": 74.93725389065787}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB MSMARCO", "type": "msmarco", "config": "default", "split": "dev", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 7.085}, {"type": "map_at_10", "value": 13.344000000000001}, {"type": "map_at_100", "value": 14.501}, {"type": "map_at_1000", "value": 14.605}, {"type": "map_at_3", "value": 10.758}, {"type": "map_at_5", "value": 12.162}, {"type": "mrr_at_1", "value": 7.278}, {"type": "mrr_at_10", "value": 13.607}, {"type": "mrr_at_100", "value": 14.761}, {"type": "mrr_at_1000", "value": 14.860000000000001}, {"type": "mrr_at_3", "value": 11.003}, {"type": "mrr_at_5", "value": 12.421}, {"type": "ndcg_at_1", "value": 7.278}, {"type": "ndcg_at_10", "value": 17.473}, {"type": "ndcg_at_100", "value": 23.721}, {"type": "ndcg_at_1000", "value": 26.69}, {"type": "ndcg_at_3", "value": 12.078}, {"type": "ndcg_at_5", "value": 14.62}, {"type": "precision_at_1", "value": 7.278}, {"type": "precision_at_10", "value": 3.175}, {"type": "precision_at_100", "value": 0.639}, {"type": "precision_at_1000", "value": 0.09}, {"type": "precision_at_3", "value": 5.382}, {"type": "precision_at_5", "value": 4.519}, {"type": "recall_at_1", "value": 7.085}, {"type": "recall_at_10", "value": 30.549}, {"type": "recall_at_100", "value": 60.919999999999995}, {"type": "recall_at_1000", "value": 84.372}, {"type": "recall_at_3", "value": 15.675}, {"type": "recall_at_5", "value": 21.818}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPDomainClassification (en)", "type": "mteb/mtop_domain", "config": "en", "split": "test", "revision": "d80d48c1eb48d3562165c59d59d0034df9fff0bf"}, "metrics": [{"type": "accuracy", "value": 94.46876424988601}, {"type": "f1", "value": 94.23159241922738}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MTOPIntentClassification (en)", "type": 
"mteb/mtop_intent", "config": "en", "split": "test", "revision": "ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba"}, "metrics": [{"type": "accuracy", "value": 81.0875512995896}, {"type": "f1", "value": 61.674961674414}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveIntentClassification (en)", "type": "mteb/amazon_massive_intent", "config": "en", "split": "test", "revision": "31efe3c427b0bae9c22cbb560b8f15491cc6bed7"}, "metrics": [{"type": "accuracy", "value": 75.01344989912575}, {"type": "f1", "value": 71.7942527839921}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB MassiveScenarioClassification (en)", "type": "mteb/amazon_massive_scenario", "config": "en", "split": "test", "revision": "7d571f92784cd94a019292a1f45445077d0ef634"}, "metrics": [{"type": "accuracy", "value": 79.15601882985877}, {"type": "f1", "value": 78.82502954601195}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringP2P", "type": "mteb/medrxiv-clustering-p2p", "config": "default", "split": "test", "revision": "e7a26af6f3ae46b30dde8737f02c07b1505bcc73"}, "metrics": [{"type": "v_measure", "value": 31.468806971345227}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB MedrxivClusteringS2S", "type": "mteb/medrxiv-clustering-s2s", "config": "default", "split": "test", "revision": "35191c8c0dca72d8ff3efcd72aa802307d469663"}, "metrics": [{"type": "v_measure", "value": 27.874332804382256}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB MindSmallReranking", "type": "mteb/mind_small", "config": "default", "split": "test", "revision": "3bdac13927fdc888b903db93b2ffdbd90b295a69"}, "metrics": [{"type": "map", "value": 30.099340785595842}, {"type": "mrr", "value": 31.077367694660257}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NFCorpus", "type": "nfcorpus", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 3.9050000000000002}, {"type": "map_at_10", "value": 8.931000000000001}, {"type": "map_at_100", "value": 11.246}, {"type": "map_at_1000", "value": 12.579}, {"type": "map_at_3", "value": 6.544}, {"type": "map_at_5", "value": 7.854}, {"type": "mrr_at_1", "value": 33.745999999999995}, {"type": "mrr_at_10", "value": 44.734}, {"type": "mrr_at_100", "value": 45.486}, {"type": "mrr_at_1000", "value": 45.534}, {"type": "mrr_at_3", "value": 42.157}, {"type": "mrr_at_5", "value": 43.813}, {"type": "ndcg_at_1", "value": 31.734}, {"type": "ndcg_at_10", "value": 26.284999999999997}, {"type": "ndcg_at_100", "value": 25.211}, {"type": "ndcg_at_1000", "value": 34.974}, {"type": "ndcg_at_3", "value": 29.918}, {"type": "ndcg_at_5", "value": 29.066}, {"type": "precision_at_1", "value": 33.745999999999995}, {"type": "precision_at_10", "value": 19.628}, {"type": "precision_at_100", "value": 6.476999999999999}, {"type": "precision_at_1000", "value": 1.976}, {"type": "precision_at_3", "value": 28.793000000000003}, {"type": "precision_at_5", "value": 25.759}, {"type": "recall_at_1", "value": 3.9050000000000002}, {"type": "recall_at_10", "value": 13.375}, {"type": "recall_at_100", "value": 28.453}, {"type": "recall_at_1000", "value": 61.67399999999999}, {"type": "recall_at_3", "value": 7.774}, {"type": "recall_at_5", "value": 10.754}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB NQ", "type": "nq", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 18.33}, {"type": "map_at_10", "value": 30.44}, {"type": "map_at_100", "value": 31.848}, {"type": 
"map_at_1000", "value": 31.906000000000002}, {"type": "map_at_3", "value": 26.143}, {"type": "map_at_5", "value": 28.583}, {"type": "mrr_at_1", "value": 21.031}, {"type": "mrr_at_10", "value": 33.028}, {"type": "mrr_at_100", "value": 34.166000000000004}, {"type": "mrr_at_1000", "value": 34.208}, {"type": "mrr_at_3", "value": 29.089}, {"type": "mrr_at_5", "value": 31.362000000000002}, {"type": "ndcg_at_1", "value": 21.031}, {"type": "ndcg_at_10", "value": 37.65}, {"type": "ndcg_at_100", "value": 43.945}, {"type": "ndcg_at_1000", "value": 45.338}, {"type": "ndcg_at_3", "value": 29.256999999999998}, {"type": "ndcg_at_5", "value": 33.453}, {"type": "precision_at_1", "value": 21.031}, {"type": "precision_at_10", "value": 6.8309999999999995}, {"type": "precision_at_100", "value": 1.035}, {"type": "precision_at_1000", "value": 0.117}, {"type": "precision_at_3", "value": 13.818}, {"type": "precision_at_5", "value": 10.649000000000001}, {"type": "recall_at_1", "value": 18.33}, {"type": "recall_at_10", "value": 57.330999999999996}, {"type": "recall_at_100", "value": 85.284}, {"type": "recall_at_1000", "value": 95.676}, {"type": "recall_at_3", "value": 35.356}, {"type": "recall_at_5", "value": 45.073}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB QuoraRetrieval", "type": "quora", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 66.373}, {"type": "map_at_10", "value": 80.233}, {"type": "map_at_100", "value": 80.973}, {"type": "map_at_1000", "value": 80.99499999999999}, {"type": "map_at_3", "value": 77.127}, {"type": "map_at_5", "value": 79.056}, {"type": "mrr_at_1", "value": 76.55}, {"type": "mrr_at_10", "value": 83.813}, {"type": "mrr_at_100", "value": 83.96900000000001}, {"type": "mrr_at_1000", "value": 83.97200000000001}, {"type": "mrr_at_3", "value": 82.547}, {"type": "mrr_at_5", "value": 83.38600000000001}, {"type": "ndcg_at_1", "value": 76.53999999999999}, {"type": "ndcg_at_10", "value": 84.638}, {"type": "ndcg_at_100", "value": 86.28099999999999}, {"type": "ndcg_at_1000", "value": 86.459}, {"type": "ndcg_at_3", "value": 81.19}, {"type": "ndcg_at_5", "value": 83.057}, {"type": "precision_at_1", "value": 76.53999999999999}, {"type": "precision_at_10", "value": 12.928999999999998}, {"type": "precision_at_100", "value": 1.514}, {"type": "precision_at_1000", "value": 0.156}, {"type": "precision_at_3", "value": 35.503}, {"type": "precision_at_5", "value": 23.512}, {"type": "recall_at_1", "value": 66.373}, {"type": "recall_at_10", "value": 93.273}, {"type": "recall_at_100", "value": 99.031}, {"type": "recall_at_1000", "value": 99.91799999999999}, {"type": "recall_at_3", "value": 83.55799999999999}, {"type": "recall_at_5", "value": 88.644}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClustering", "type": "mteb/reddit-clustering", "config": "default", "split": "test", "revision": "24640382cdbf8abc73003fb0fa6d111a705499eb"}, "metrics": [{"type": "v_measure", "value": 43.67174666339103}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB RedditClusteringP2P", "type": "mteb/reddit-clustering-p2p", "config": "default", "split": "test", "revision": "282350215ef01743dc01b456c7f5241fa8937f16"}, "metrics": [{"type": "v_measure", "value": 61.66838659211271}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SCIDOCS", "type": "scidocs", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 2.318}, {"type": "map_at_10", "value": 5.938000000000001}, {"type": 
"map_at_100", "value": 7.582}, {"type": "map_at_1000", "value": 7.936}, {"type": "map_at_3", "value": 4.208}, {"type": "map_at_5", "value": 5.098}, {"type": "mrr_at_1", "value": 11.4}, {"type": "mrr_at_10", "value": 17.655}, {"type": "mrr_at_100", "value": 19.088}, {"type": "mrr_at_1000", "value": 19.203}, {"type": "mrr_at_3", "value": 15.25}, {"type": "mrr_at_5", "value": 16.535}, {"type": "ndcg_at_1", "value": 11.4}, {"type": "ndcg_at_10", "value": 10.388}, {"type": "ndcg_at_100", "value": 18.165}, {"type": "ndcg_at_1000", "value": 24.842}, {"type": "ndcg_at_3", "value": 9.414}, {"type": "ndcg_at_5", "value": 8.453}, {"type": "precision_at_1", "value": 11.4}, {"type": "precision_at_10", "value": 5.54}, {"type": "precision_at_100", "value": 1.71}, {"type": "precision_at_1000", "value": 0.33}, {"type": "precision_at_3", "value": 8.866999999999999}, {"type": "precision_at_5", "value": 7.580000000000001}, {"type": "recall_at_1", "value": 2.318}, {"type": "recall_at_10", "value": 11.267000000000001}, {"type": "recall_at_100", "value": 34.743}, {"type": "recall_at_1000", "value": 67.07300000000001}, {"type": "recall_at_3", "value": 5.408}, {"type": "recall_at_5", "value": 7.713}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB SICK-R", "type": "mteb/sickr-sts", "config": "default", "split": "test", "revision": "a6ea5a8cab320b040a23452cc28066d9beae2cee"}, "metrics": [{"type": "cos_sim_spearman", "value": 72.15850185456762}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS12", "type": "mteb/sts12-sts", "config": "default", "split": "test", "revision": "a0d554a64d88156834ff5ae9920b964011b16384"}, "metrics": [{"type": "cos_sim_spearman", "value": 61.59518395985063}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS13", "type": "mteb/sts13-sts", "config": "default", "split": "test", "revision": "7e90230a92c190f1bf69ae9002b8cea547a64cca"}, "metrics": [{"type": "cos_sim_spearman", "value": 79.71131323749228}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS14", "type": "mteb/sts14-sts", "config": "default", "split": "test", "revision": "6031580fec1f6af667f0bd2da0a551cf4f0b2375"}, "metrics": [{"type": "cos_sim_spearman", "value": 72.10974664733891}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS15", "type": "mteb/sts15-sts", "config": "default", "split": "test", "revision": "ae752c7c21bf194d8b67fd573edf7ae58183cbe3"}, "metrics": [{"type": "cos_sim_spearman", "value": 82.17899407125657}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS16", "type": "mteb/sts16-sts", "config": "default", "split": "test", "revision": "4d8694f8f0e0100860b497b999b3dbed754a0513"}, "metrics": [{"type": "cos_sim_spearman", "value": 79.41138579273438}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS17 (en-en)", "type": "mteb/sts17-crosslingual-sts", "config": "en-en", "split": "test", "revision": "af5e6fb845001ecf41f4c1e033ce921939a2a68d"}, "metrics": [{"type": "cos_sim_spearman", "value": 85.44343473477939}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STS22 (en)", "type": "mteb/sts22-crosslingual-sts", "config": "en", "split": "test", "revision": "6d1ba47164174a496b7fa5d3569dae26a6813b80"}, "metrics": [{"type": "cos_sim_spearman", "value": 63.90264271389905}]}, {"task": {"type": "STS"}, "dataset": {"name": "MTEB STSBenchmark", "type": "mteb/stsbenchmark-sts", "config": "default", "split": "test", "revision": "b0fddb56ed78048fa8b90373c8a3cfc37b684831"}, "metrics": [{"type": "cos_sim_spearman", "value": 77.44151296326804}]}, {"task": {"type": "Reranking"}, 
"dataset": {"name": "MTEB SciDocsRR", "type": "mteb/scidocs-reranking", "config": "default", "split": "test", "revision": "d3c5e1fc0b855ab6097bf1cda04dd73947d7caab"}, "metrics": [{"type": "map", "value": 76.27597486396654}, {"type": "mrr", "value": 93.28127119793788}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB SciFact", "type": "scifact", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 49.594}, {"type": "map_at_10", "value": 60.951}, {"type": "map_at_100", "value": 61.68599999999999}, {"type": "map_at_1000", "value": 61.712}, {"type": "map_at_3", "value": 57.946}, {"type": "map_at_5", "value": 59.89}, {"type": "mrr_at_1", "value": 52.666999999999994}, {"type": "mrr_at_10", "value": 62.724000000000004}, {"type": "mrr_at_100", "value": 63.269}, {"type": "mrr_at_1000", "value": 63.291}, {"type": "mrr_at_3", "value": 60.167}, {"type": "mrr_at_5", "value": 61.95}, {"type": "ndcg_at_1", "value": 52.666999999999994}, {"type": "ndcg_at_10", "value": 66.35600000000001}, {"type": "ndcg_at_100", "value": 69.463}, {"type": "ndcg_at_1000", "value": 70.111}, {"type": "ndcg_at_3", "value": 60.901}, {"type": "ndcg_at_5", "value": 64.054}, {"type": "precision_at_1", "value": 52.666999999999994}, {"type": "precision_at_10", "value": 9.0}, {"type": "precision_at_100", "value": 1.073}, {"type": "precision_at_1000", "value": 0.11299999999999999}, {"type": "precision_at_3", "value": 24.221999999999998}, {"type": "precision_at_5", "value": 16.333000000000002}, {"type": "recall_at_1", "value": 49.594}, {"type": "recall_at_10", "value": 81.256}, {"type": "recall_at_100", "value": 94.989}, {"type": "recall_at_1000", "value": 100.0}, {"type": "recall_at_3", "value": 66.706}, {"type": "recall_at_5", "value": 74.411}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB SprintDuplicateQuestions", "type": "mteb/sprintduplicatequestions-pairclassification", "config": "default", "split": "test", "revision": "d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46"}, "metrics": [{"type": "cos_sim_accuracy", "value": 99.65049504950495}, {"type": "cos_sim_ap", "value": 88.1421623503371}, {"type": "cos_sim_f1", "value": 81.44072036018008}, {"type": "cos_sim_precision", "value": 81.48148148148148}, {"type": "cos_sim_recall", "value": 81.39999999999999}, {"type": "dot_accuracy", "value": 99.37623762376238}, {"type": "dot_ap", "value": 69.87152032240303}, {"type": "dot_f1", "value": 65.64885496183206}, {"type": "dot_precision", "value": 72.18225419664267}, {"type": "dot_recall", "value": 60.199999999999996}, {"type": "euclidean_accuracy", "value": 99.63069306930693}, {"type": "euclidean_ap", "value": 86.13858297902517}, {"type": "euclidean_f1", "value": 79.87679671457904}, {"type": "euclidean_precision", "value": 82.0675105485232}, {"type": "euclidean_recall", "value": 77.8}, {"type": "manhattan_accuracy", "value": 99.63168316831683}, {"type": "manhattan_ap", "value": 86.31976532265482}, {"type": "manhattan_f1", "value": 80.10204081632654}, {"type": "manhattan_precision", "value": 81.77083333333334}, {"type": "manhattan_recall", "value": 78.5}, {"type": "max_accuracy", "value": 99.65049504950495}, {"type": "max_ap", "value": 88.1421623503371}, {"type": "max_f1", "value": 81.44072036018008}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClustering", "type": "mteb/stackexchange-clustering", "config": "default", "split": "test", "revision": "6cbc1f7b2bc0622f2e39d2c77fa502909748c259"}, "metrics": [{"type": "v_measure", "value": 
68.19604139959692}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB StackExchangeClusteringP2P", "type": "mteb/stackexchange-clustering-p2p", "config": "default", "split": "test", "revision": "815ca46b2622cec33ccafc3735d572c266efdb44"}, "metrics": [{"type": "v_measure", "value": 36.3569584557381}]}, {"task": {"type": "Reranking"}, "dataset": {"name": "MTEB StackOverflowDupQuestions", "type": "mteb/stackoverflowdupquestions-reranking", "config": "default", "split": "test", "revision": "e185fbe320c72810689fc5848eb6114e1ef5ec69"}, "metrics": [{"type": "map", "value": 48.82174503355024}, {"type": "mrr", "value": 49.610933388506915}]}, {"task": {"type": "Summarization"}, "dataset": {"name": "MTEB SummEval", "type": "mteb/summeval", "config": "default", "split": "test", "revision": "cda12ad7615edc362dbf25a00fdd61d3b1eaf93c"}, "metrics": [{"type": "cos_sim_pearson", "value": 30.805895993742798}, {"type": "cos_sim_spearman", "value": 31.445431226826738}, {"type": "dot_pearson", "value": 24.441585432516867}, {"type": "dot_spearman", "value": 25.468117334810188}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB TRECCOVID", "type": "trec-covid", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 0.2}, {"type": "map_at_10", "value": 1.431}, {"type": "map_at_100", "value": 7.138999999999999}, {"type": "map_at_1000", "value": 17.933}, {"type": "map_at_3", "value": 0.551}, {"type": "map_at_5", "value": 0.7979999999999999}, {"type": "mrr_at_1", "value": 76.0}, {"type": "mrr_at_10", "value": 85.167}, {"type": "mrr_at_100", "value": 85.21300000000001}, {"type": "mrr_at_1000", "value": 85.21300000000001}, {"type": "mrr_at_3", "value": 84.667}, {"type": "mrr_at_5", "value": 85.167}, {"type": "ndcg_at_1", "value": 72.0}, {"type": "ndcg_at_10", "value": 63.343}, {"type": "ndcg_at_100", "value": 45.739999999999995}, {"type": "ndcg_at_1000", "value": 41.875}, {"type": "ndcg_at_3", "value": 68.162}, {"type": "ndcg_at_5", "value": 65.666}, {"type": "precision_at_1", "value": 76.0}, {"type": "precision_at_10", "value": 66.4}, {"type": "precision_at_100", "value": 46.800000000000004}, {"type": "precision_at_1000", "value": 18.996}, {"type": "precision_at_3", "value": 72.667}, {"type": "precision_at_5", "value": 68.4}, {"type": "recall_at_1", "value": 0.2}, {"type": "recall_at_10", "value": 1.712}, {"type": "recall_at_100", "value": 10.896}, {"type": "recall_at_1000", "value": 40.115}, {"type": "recall_at_3", "value": 0.594}, {"type": "recall_at_5", "value": 0.889}]}, {"task": {"type": "Retrieval"}, "dataset": {"name": "MTEB Touche2020", "type": "webis-touche2020", "config": "default", "split": "test", "revision": "None"}, "metrics": [{"type": "map_at_1", "value": 1.0619999999999998}, {"type": "map_at_10", "value": 5.611}, {"type": "map_at_100", "value": 8.841000000000001}, {"type": "map_at_1000", "value": 10.154}, {"type": "map_at_3", "value": 2.7720000000000002}, {"type": "map_at_5", "value": 4.181}, {"type": "mrr_at_1", "value": 14.285999999999998}, {"type": "mrr_at_10", "value": 26.249}, {"type": "mrr_at_100", "value": 28.046}, {"type": "mrr_at_1000", "value": 28.083000000000002}, {"type": "mrr_at_3", "value": 21.769}, {"type": "mrr_at_5", "value": 24.524}, {"type": "ndcg_at_1", "value": 11.224}, {"type": "ndcg_at_10", "value": 12.817}, {"type": "ndcg_at_100", "value": 23.183999999999997}, {"type": "ndcg_at_1000", "value": 35.099000000000004}, {"type": "ndcg_at_3", "value": 11.215}, {"type": "ndcg_at_5", "value": 12.016}, {"type": 
"precision_at_1", "value": 14.285999999999998}, {"type": "precision_at_10", "value": 12.653}, {"type": "precision_at_100", "value": 5.306}, {"type": "precision_at_1000", "value": 1.294}, {"type": "precision_at_3", "value": 13.605}, {"type": "precision_at_5", "value": 13.877999999999998}, {"type": "recall_at_1", "value": 1.0619999999999998}, {"type": "recall_at_10", "value": 10.377}, {"type": "recall_at_100", "value": 34.77}, {"type": "recall_at_1000", "value": 70.875}, {"type": "recall_at_3", "value": 3.688}, {"type": "recall_at_5", "value": 6.2509999999999994}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB ToxicConversationsClassification", "type": "mteb/toxic_conversations_50k", "config": "default", "split": "test", "revision": "d7c0de2777da35d6aae2200a62c6e0e5af397c4c"}, "metrics": [{"type": "accuracy", "value": 71.8488}, {"type": "ap", "value": 15.590122317097372}, {"type": "f1", "value": 55.86108396102662}]}, {"task": {"type": "Classification"}, "dataset": {"name": "MTEB TweetSentimentExtractionClassification", "type": "mteb/tweet_sentiment_extraction", "config": "default", "split": "test", "revision": "d604517c81ca91fe16a244d1248fc021f9ecee7a"}, "metrics": [{"type": "accuracy", "value": 57.61460101867573}, {"type": "f1", "value": 57.8678726826158}]}, {"task": {"type": "Clustering"}, "dataset": {"name": "MTEB TwentyNewsgroupsClustering", "type": "mteb/twentynewsgroups-clustering", "config": "default", "split": "test", "revision": "6125ec4e24fa026cec8a478383ee943acfbd5449"}, "metrics": [{"type": "v_measure", "value": 32.01459876897588}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterSemEval2015", "type": "mteb/twittersemeval2015-pairclassification", "config": "default", "split": "test", "revision": "70970daeab8776df92f5ea462b6173c0b46fd2d1"}, "metrics": [{"type": "cos_sim_accuracy", "value": 84.1032365738809}, {"type": "cos_sim_ap", "value": 66.60137415520323}, {"type": "cos_sim_f1", "value": 62.12845010615712}, {"type": "cos_sim_precision", "value": 62.493326214628944}, {"type": "cos_sim_recall", "value": 61.76781002638523}, {"type": "dot_accuracy", "value": 81.85015199380103}, {"type": "dot_ap", "value": 58.854644211365084}, {"type": "dot_f1", "value": 56.15180082185158}, {"type": "dot_precision", "value": 51.806422836752894}, {"type": "dot_recall", "value": 61.2928759894459}, {"type": "euclidean_accuracy", "value": 83.6681170650295}, {"type": "euclidean_ap", "value": 64.93555585305603}, {"type": "euclidean_f1", "value": 61.02775195857125}, {"type": "euclidean_precision", "value": 61.42742582197273}, {"type": "euclidean_recall", "value": 60.633245382585756}, {"type": "manhattan_accuracy", "value": 83.73368301841808}, {"type": "manhattan_ap", "value": 65.45422483039611}, {"type": "manhattan_f1", "value": 61.58552806597499}, {"type": "manhattan_precision", "value": 62.09763948497854}, {"type": "manhattan_recall", "value": 61.08179419525066}, {"type": "max_accuracy", "value": 84.1032365738809}, {"type": "max_ap", "value": 66.60137415520323}, {"type": "max_f1", "value": 62.12845010615712}]}, {"task": {"type": "PairClassification"}, "dataset": {"name": "MTEB TwitterURLCorpus", "type": "mteb/twitterurlcorpus-pairclassification", "config": "default", "split": "test", "revision": "8b6510b0b1fa4e4c4f879467980e9be563ec1cdf"}, "metrics": [{"type": "cos_sim_accuracy", "value": 86.36628245430201}, {"type": "cos_sim_ap", "value": 79.29963896460292}, {"type": "cos_sim_f1", "value": 72.63895990066467}, {"type": "cos_sim_precision", "value": 
69.09128803668196}, {"type": "cos_sim_recall", "value": 76.57068062827224}, {"type": "dot_accuracy", "value": 84.65091007878294}, {"type": "dot_ap", "value": 75.04883449222972}, {"type": "dot_f1", "value": 69.18569117382708}, {"type": "dot_precision", "value": 64.89512376070682}, {"type": "dot_recall", "value": 74.08376963350786}, {"type": "euclidean_accuracy", "value": 85.88116583226608}, {"type": "euclidean_ap", "value": 78.42687640324908}, {"type": "euclidean_f1", "value": 71.74350111107192}, {"type": "euclidean_precision", "value": 66.19800820152314}, {"type": "euclidean_recall", "value": 78.3030489682784}, {"type": "manhattan_accuracy", "value": 86.27508052935926}, {"type": "manhattan_ap", "value": 79.29581298930101}, {"type": "manhattan_f1", "value": 72.51838235294117}, {"type": "manhattan_precision", "value": 67.03921568627452}, {"type": "manhattan_recall", "value": 78.97289805974745}, {"type": "max_accuracy", "value": 86.36628245430201}, {"type": "max_ap", "value": 79.29963896460292}, {"type": "max_f1", "value": 72.63895990066467}]}]}]} | McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse | null | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | null | 2024-04-30T02:45:32+00:00 | [
"2404.05961"
] | [
"en"
] | TAGS
#peft #safetensors #text-embedding #embeddings #information-retrieval #beir #text-classification #language-model #text-clustering #text-semantic-similarity #text-evaluation #text-reranking #feature-extraction #sentence-similarity #Sentence Similarity #natural_questions #ms_marco #fever #hotpot_qa #mteb #en #arxiv-2404.05961 #license-mit #model-index #region-us
|
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- Repository: URL
- Paper: URL
## Installation
## Usage
## Questions
If you have any question about the code, feel free to email Parishad ('parishad.behnamghader@URL') and Vaibhav ('vaibhav.adlakha@URL'). | [
"## Installation",
"## Usage",
"## Questions\nIf you have any question about the code, feel free to email Parishad ('parishad.behnamghader@URL') and Vaibhav ('vaibhav.adlakha@URL')."
] | [
"TAGS\n#peft #safetensors #text-embedding #embeddings #information-retrieval #beir #text-classification #language-model #text-clustering #text-semantic-similarity #text-evaluation #text-reranking #feature-extraction #sentence-similarity #Sentence Similarity #natural_questions #ms_marco #fever #hotpot_qa #mteb #en #arxiv-2404.05961 #license-mit #model-index #region-us \n",
"## Installation",
"## Usage",
"## Questions\nIf you have any question about the code, feel free to email Parishad ('parishad.behnamghader@URL') and Vaibhav ('vaibhav.adlakha@URL')."
] | [
111,
3,
3,
55
] | [
"TAGS\n#peft #safetensors #text-embedding #embeddings #information-retrieval #beir #text-classification #language-model #text-clustering #text-semantic-similarity #text-evaluation #text-reranking #feature-extraction #sentence-similarity #Sentence Similarity #natural_questions #ms_marco #fever #hotpot_qa #mteb #en #arxiv-2404.05961 #license-mit #model-index #region-us \n## Installation## Usage## Questions\nIf you have any question about the code, feel free to email Parishad ('parishad.behnamghader@URL') and Vaibhav ('vaibhav.adlakha@URL')."
] |
text-generation | transformers |
# Model Details
<b>Ko-Llama3-Luxia-8B</b>, trained and released by the language model team at Saltlux AI Labs, is Meta's Llama-3-8B <b>specialized for Korean</b>.<br><br>
Roughly 100GB of data was selected from the more than 1TB of Korean training data held in-house and used for pretraining.<br><br>
In addition, the publicly released Llama-3 tokenizer was extended with Korean tokens and used during pretraining.
- **Meta Llama-3:** Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
- **License:** Llama3 License [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
### Intended Use
Ko-Llama3-Luxia-8B is intended for research and can be freely fine-tuned and applied to a wide range of natural language generation tasks.
### How to Use
This model card provides example code for running the `Ko-Llama3-Luxia-8B` model with the transformers library.
```
import transformers
import torch

model_id = "saltlux/Ko-Llama3-Luxia-8B"

pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)

# Korean prompt: "Hello. This is Saltlux AI Labs."
pipeline("<|begin_of_text|>안녕하세요. 솔트룩스 AI Labs 입니다.")
```
# Training Details
The pretraining data for Korean specialization is an approximately 100GB corpus (up to 2023) held by Saltlux, covering domains such as news, law, patents, medical, history, society, culture, and dialogue (written and spoken).<br>
- The model currently provided is a checkpoint trained for 0.9 epoch.<br>
### Training Hardware
Pretraining was carried out on 8 NVIDIA H100 80GB GPUs.
#### Training Hyperparameters
<table>
<tr>
<td><strong>Model</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Learning rate</strong>
</td>
<td><strong>Batch</strong>
</td>
<td><strong>Precision</strong>
</td>
</tr>
<tr>
<td>Ko-Llama3-Luxia-8B
</td>
<td>8B
</td>
<td>8k
</td>
<td>yes
</td>
<td>1e-5
</td>
<td>128
</td>
<td>bf16
</td>
</tr>
</table>
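For readers who want to relate the table above to a concrete configuration, the sketch below maps the listed hyperparameters onto a Hugging Face `TrainingArguments` object. It is illustrative only: the actual pretraining framework and script are not published here, and every value not in the table (per-device batch size, gradient accumulation, logging and saving settings) is an assumption.
```
from transformers import TrainingArguments

# Global batch size 128 is assumed to be reached as 2 per device x 8 GPUs x 8 accumulation steps.
training_args = TrainingArguments(
    output_dir="ko-llama3-luxia-8b-pretrain",   # hypothetical output path
    learning_rate=1e-5,                         # from the table
    per_device_train_batch_size=2,              # assumption
    gradient_accumulation_steps=8,              # assumption
    bf16=True,                                  # precision from the table
    num_train_epochs=1,                         # released checkpoint corresponds to ~0.9 epoch
    logging_steps=100,                          # assumption
    save_strategy="steps",
    save_steps=1000,                            # assumption
)
```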
### Tokenizer
To specialize the Llama-3 tokenizer for Korean, 17,536 Korean tokens were added to its vocabulary and used during pretraining.
<table>
<tr>
<td><strong>Model</strong>
</td>
<td><strong>Vocab Size</strong>
</td>
</tr>
<tr>
<td>Llama-3
</td>
<td>128,256
</td>
</tr>
<tr>
<td>Ko-Llama3-Luxia-8B
</td>
<td>145,792
</td>
</tr>
</table>
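The vocabulary sizes above, and the tokenization comparisons shown in the next section, can be checked with a short script. The sketch below is a minimal example; it assumes you have Hub access to both tokenizers (the Meta Llama 3 repository is gated and requires accepting its license).
```
from transformers import AutoTokenizer

llama3_tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
luxia_tok = AutoTokenizer.from_pretrained("saltlux/Ko-Llama3-Luxia-8B")

# Vocabulary sizes: 128,256 for Llama-3 vs 145,792 after adding 17,536 Korean tokens.
print(len(llama3_tok), len(luxia_tok))

# Any Korean sentence works as a probe; the extended tokenizer should produce fewer, longer tokens.
sample = "안녕하세요, 만나서 반갑습니다."
print(llama3_tok.tokenize(sample))
print(luxia_tok.tokenize(sample))
```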
### Tokenizer Result
+ Ko
<table>
<tr>
<td><strong>Input</strong>
</td>
<td><strong>Llama-3</strong>
</td>
<td><strong>Ko-Llama3-Luxia-8B</strong>
</td>
</tr>
<tr>
<td>์์ฆ ๋ ์จ๊ฐ ๋๋ฌด ์ค๋ฝ๊ฐ๋ฝํด์ ์์ง๋ ๊ฒจ์ธ์ท์ ๋ชป์น์ ์ด์..
</td>
<td>['์', '์ฆ', ' ๋ ', '์จ', '๊ฐ', ' ๋๋ฌด', ' ์ค', '๋ฝ', '๊ฐ', '๋ฝ', 'ํด์', ' ์์ง', '๋', ' ๊ฒจ', '์ธ', '๏ฟฝ', '๏ฟฝ', '์', ' ๋ชป', '์น', '์ ', '์ด์', '..']
</td>
<td>['์์ฆ', ' ๋ ์จ', '๊ฐ', ' ๋๋ฌด', ' ์ค๋ฝ', '๊ฐ๋ฝ', 'ํด์', ' ์์ง', '๋', ' ๊ฒจ์ธ', '์ท', '์', ' ๋ชป', '์น', '์ ', '์ด์', '..']
</td>
</tr>
<tr>
<td>๋ง์๋ ๋ฐฅ์ ๋์
จ์ต๋๊น? ๋ง์ด ๊ถ๊ธํ๋ค์.
</td>
<td>['๋ง', '์๋', ' ๏ฟฝ', '๏ฟฝ', '์', ' ๋', '์
จ', '์ต', '๋๊น', '?', ' ๋ง', '์ด', ' ๊ถ๊ธ', 'ํ', '๋ค์', '.']
</td>
<td>['๋ง', '์๋', ' ๋ฐฅ', '์', ' ๋์
จ', '์ต', '๋๊น', '?', ' ๋ง', '์ด', ' ๊ถ๊ธ', 'ํ', '๋ค์', '.']
</td>
</tr>
<tr>
<td>๋๋ฒ์๋ถํฐ ํ๊ธ์ฌ ํ๋ก๊น์ง ์ํ๋ ํ๋ก๋ฅผ ์ฐพ๋ ๊ฐ์ฅ ๋น ๋ฅธ ๋ฐฉ๋ฒ - ์๋ฉด ๊ฒ์, ์์ฒญ ํ๋ก, ์ ์ฌ ํ๋ก, AI ์ถ์ฒ, ํ๋ก ๋ฐ ๋ฒ๋ น ๊ฒ์.
</td>
<td>['๋', '๋ฒ', '์', '๋ถํฐ', ' ํ', '๊ธ', '์ฌ', ' ํ', '๋ก', '๊น์ง', ' ์', 'ํ๋', ' ํ', '๋ก', '๋ฅผ', ' ์ฐพ', '๋', ' ๊ฐ์ฅ', ' ๋น ', '๋ฅธ', ' ๋ฐฉ๋ฒ', ' -', ' ์', '๋ฉด', ' ๊ฒ์', ',', ' ์์ฒญ', ' ํ', '๋ก', ',', ' ์ ', '์ฌ', ' ํ', '๋ก', ',', ' AI', ' ์ถ์ฒ', ',', ' ํ', '๋ก', ' ๋ฐ', ' ๋ฒ', '๋ น', ' ๊ฒ์', '.']
</td>
<td>['๋', '๋ฒ', '์', '๋ถํฐ', ' ํ', '๊ธ', '์ฌ', ' ํ๋ก', '๊น์ง', ' ์', 'ํ๋', ' ํ๋ก', '๋ฅผ', ' ์ฐพ', '๋', ' ๊ฐ์ฅ', ' ๋น ๋ฅธ', ' ๋ฐฉ๋ฒ', ' -', ' ์๋ฉด', ' ๊ฒ์', ',', ' ์์ฒญ', ' ํ๋ก', ',', ' ์ ์ฌ', ' ํ๋ก', ',', ' AI', ' ์ถ์ฒ', ',', ' ํ๋ก', ' ๋ฐ', ' ๋ฒ๋ น', ' ๊ฒ์', '.']
</td>
</tr>
<tr>
<td>๋ณธ ๋ฐ๋ช
์ ๊ธ์ํ์ ๋ค์ ๋ถ๋ถ์ ์์นญ์์ผ ํน์ ๋ฌด๋ฌ๋ชจ์์ ํ์ฑํ๋ ๊ฑด์ถ์ฉ ๊ธ์์ฌ ์ฅ์ํ์ผ๋ก ์ด๋ฃจ์ด์ง ๊ฒ์ ํน์ง์ด ์๋ค.
</td>
<td>['๋ณธ', ' ๋ฐ', '๋ช
', '์', ' ๊ธ', '์', 'ํ', '์', ' ๋ค', '์', ' ๋ถ๋ถ', '์', ' ์', '์นญ', '์', '์ผ', ' ํน', '์ ', ' ๋ฌด', '๏ฟฝ', '๏ฟฝ', '๋ชจ', '์', '์', ' ํ', '์ฑ', 'ํ๋', ' ๊ฑด', '์ถ', '์ฉ', ' ๊ธ', '์', '์ฌ', ' ์ฅ', '์', 'ํ', '์ผ๋ก', ' ์ด๋ฃจ', '์ด์ง', ' ๊ฒ', '์', ' ํน', '์ง', '์ด', ' ์๋ค', '.']
</td>
<td>['๋ณธ', ' ๋ฐ๋ช
', '์', ' ๊ธ์', 'ํ', '์', ' ๋ค์', ' ๋ถ๋ถ', '์', ' ์์นญ', '์', '์ผ', ' ํน์ ', ' ๋ฌด๋ฌ', '๋ชจ', '์', '์', ' ํ์ฑ', 'ํ๋', ' ๊ฑด์ถ', '์ฉ', ' ๊ธ์', '์ฌ', ' ์ฅ์', 'ํ', '์ผ๋ก', ' ์ด๋ฃจ์ด์ง', ' ๊ฒ', '์', ' ํน์ง', '์ด', ' ์๋ค', '.']
</td>
</tr>
<tr>
<td>๊ณจ๋ค๊ณต์ฆ์ ์ ์๊ธฐ๋๊ฑฐ์์? ๊ทธ๋ฆฌ๊ณ ์น๋ฃํ๋ ค๋ฉด ์ด๋ป๊ฒํด์ผํ์ฃ ?
</td>
<td>['๊ณจ', '๋ค', '๊ณต', '์ฆ', '์', ' ์', ' ์', '๊ธฐ๋', '๊ฑฐ', '์', '์', '?', ' ๊ทธ๋ฆฌ๊ณ ', ' ์น', '๋ฃ', 'ํ๋ ค', '๋ฉด', ' ์ด๋ป๊ฒ', 'ํด์ผ', 'ํ', '์ฃ ', '?']
</td>
<td>['๊ณจ', '๋ค', '๊ณต์ฆ', '์', ' ์', ' ์', '๊ธฐ๋', '๊ฑฐ', '์', '์', '?', ' ๊ทธ๋ฆฌ๊ณ ', ' ์น๋ฃ', 'ํ๋ ค', '๋ฉด', ' ์ด๋ป๊ฒ', 'ํด์ผ', 'ํ', '์ฃ ', '?']
</td>
</tr>
</table>
+ En
<table>
<tr>
<td><strong>Input</strong>
</td>
<td><strong>Llama-3</strong>
</td>
<td><strong>Ko-Llama3-Luxia-8B</strong>
</td>
</tr>
<tr>
<td>Korean cuisine, hanguk yori, or hansik, has evolved through centuries of social and political change.
</td>
<td>['K', 'orean', ' cuisine', ',', ' h', 'angu', 'k', ' y', 'ori', ',', ' or', ' hans', 'ik', ',', ' has', ' evolved', ' through', ' centuries', ' of', ' social', ' and', ' political', ' change', '.']
</td>
<td>['K', 'orean', ' cuisine', ',', ' h', 'angu', 'k', ' y', 'ori', ',', ' or', ' hans', 'ik', ',', ' has', ' evolved', ' through', ' centuries', ' of', ' social', ' and', ' political', ' change', '.']
</td>
</tr>
<tr>
<td>Son Heung-min is a South Korean professional footballer who plays as a forward for and captains both Premier League club Tottenham Hotspur and the South Korea national team.
</td>
<td>['Son', ' He', 'ung', '-min', ' is', ' a', ' South', ' Korean', ' professional', ' football', 'er', ' who', ' plays', ' as', ' a', ' forward', ' for', ' and', ' captains', ' both', ' Premier', ' League', ' club', ' Tottenham', ' Hot', 'sp', 'ur', ' and', ' the', ' South', ' Korea', ' national', ' team', '.']
</td>
<td>['Son', ' He', 'ung', '-min', ' is', ' a', ' South', ' Korean', ' professional', ' football', 'er', ' who', ' plays', ' as', ' a', ' forward', ' for', ' and', ' captains', ' both', ' Premier', ' League', ' club', ' Tottenham', ' Hot', 'sp', 'ur', ' and', ' the', ' South', ' Korea', ' national', ' team', '.']
</td>
</tr>
</table>
### Inference Result
<table>
<tr>
<td><strong>Llama-3</strong>
</td>
<td><strong>Ko-Llama3-Luxia-8B</strong>
</td>
</tr>
<tr>
<td><b>ํ๊ตญ์ ๋ฌธํ ์ค</b>์์ ๊ฐ์ฅ ํฐ ํน์ง์ ๋ฌด์์ธ๊ฐ?\nWhat is the most distinctive feature of Korean culture?\nKorean culture has many unique features. But I think that the most distinctive feature of Korean culture is its harmony with nature.\nThe harmony between people and nature is a very important concept in Korea. In fact, it is one of the basic principles of Confucianism, which is the traditional philosophy of Korea. According to this principle, humans should live in harmony with nature and respect the natural order of things.<์ดํ ์๋ต>
</td>
<td><b>ํ๊ตญ์ ๋ฌธํ ์ค</b>์ ๊ฐ์ฅ ํฐ ํน์ง์ ๋ฌด์์ผ๊น? ๋ฐ๋ก โ๋นจ๋ฆฌ๋นจ๋ฆฌโ๋ค. ํ๊ตญ์ธ์ ๋นจ๋ฆฌ๋นจ๋ฆฌ ๋ฌธํ๋ฅผ ํตํด ๋ง์ ๊ฒ์ ์ด๋ฃจ์๋ค. ํ์ง๋ง ์ด ๋น ๋ฅธ ์๋๋ ์ฐ๋ฆฌ์๊ฒ ์ด๋ค ์ํฅ์ ๋ฏธ์ณค์๊น?\nํ๊ตญ์ธ์ ๋น ๋ฅธ ์๋๋ ๊ธ์ ์ ์ธ ๋ฉด๊ณผ ๋ถ์ ์ ์ธ ๋ฉด์ ๋์์ ๊ฐ์ง๊ณ ์๋ค. ๊ธ์ ์ ์ธ ๋ฉด์, ๋น ๋ฅธ ์๋๋ก ์ธํด ์ฐ๋ฆฌ๋ ๋ง์ ๊ฒ์ ์ด๋ฃฐ ์ ์์๋ค. ์๋ฅผ ๋ค์ด, ํ๊ตญ์ ์ธ๊ณ์์ ๊ฐ์ฅ ๋น ๋ฅด๊ฒ ๊ฒฝ์ ์ฑ์ฅ์ ์ด๋ฃฌ ๋๋ผ ์ค ํ๋์ด๋ฉฐ, IT ๋ถ์ผ์์๋ ์ธ๊ณ์ ์ธ ๊ฐ๊ตญ์ผ๋ก ์๋ฆฌ ์ก์๋ค.\nํ์ง๋ง ๋น ๋ฅธ ์๋๋ ๋ถ์ ์ ์ธ ๋ฉด๋ ๊ฐ์ง๊ณ ์๋ค. ํ๊ตญ์ ๋น ๋ฅธ ์๋๋ ์คํธ๋ ์ค์ ๋ถ์์ ์ฆ๊ฐ์์ผฐ๋ค. ๋ํ, ๋น ๋ฅธ ์๋๋ ํ๊ฒฝ ํ๊ดด์ ์์ ๊ณ ๊ฐ์ ์ด๋ํ๋ค.\n\n๋น ๋ฅธ ์๋์ ์ฅ์ <์ดํ ์๋ต>
</td>
</tr>
<tr>
<td><b>ํ๊ตญ์ ๋ํ์ ์ธ ์ </b>์ธ ์์ฃผ์ ๋งฅ์ฃผ์ ์์ฐ๊ณผ์ ๊ณผ ํน์ฑ์ ๊ดํ ์ฐ๊ตฌ\nA Study on the Production Process and Characteristics of Korean Soju and Beer\nThe purpose of this study was to investigate the production process and characteristics of soju and beer. The results are as follows: 1. The raw materials used for making soju were rice, wheat, corn, barley, sweet potato, and buckwheat. The main ingredients in soju were alcohol, water, sugar, and flavoring agents. The main flavoring agents were glycerin, caramel color, and artificial flavors. <์ดํ ์๋ต>
</td>
<td><b>ํ๊ตญ์ ๋ํ์ ์ธ ์ </b>์ธ ์์ฃผ์ ๋ง๊ฑธ๋ฆฌ๋ ๋ชจ๋ ์๋ก ๋ง๋ ๋ค.\n์์ ํ๊ตญ์ธ์ ์ฃผ์์ด๊ธฐ ๋๋ฌธ์ ์ ์ ๋ง๋ค ๋๋ ๊ฐ์ฅ ๋ง์ด ์ฌ์ฉํ๋ค.\n์์ฃผ๋ ์ฆ๋ฅ์ฃผ์ด๊ณ , ๋ง๊ฑธ๋ฆฌ๋ ๋ฐํจ์ฃผ์ด๋ค. ์ฆ๋ฅ์ฃผ๋ ์์ฝ์ฌ ๋์๊ฐ ๋๊ณ , ๋ฐํจ์ฃผ๋ ๋ฎ๋ค.\n๋ง๊ฑธ๋ฆฌ์๋ ํจ๋ชจ๊ฐ ์ด์ ์์ด ํ์ฐ์ด ์๊ธด๋ค.\n๊ทธ๋์ ๋ง๊ฑธ๋ฆฌ๋ฅผ ๋ง์๋ฉด ์์ด ๋๋ถ๋ฃฉํด์ง๋ ๊ฒ์ด๋ค.\n๋ง๊ฑธ๋ฆฌ์ ์์ฝ์ฌ ๋์๋ 6~8% ์ ๋์ธ๋ฐ, ์์ฃผ์ ์์ฝ์ฌ ๋์๋ 20% ์ด์์ด๋ค.\n์ ์ ์ข
๋ฅ์ ๋ฐ๋ผ ์์ฝ์ฌ ๋์๊ฐ ๋ค๋ฅด์ง๋ง, ์ผ๋ฐ์ ์ผ๋ก ์์ฝ์ฌ ๋์๊ฐ ๋์ผ๋ฉด ๋นจ๋ฆฌ ์ทจํ๊ณ , ๋ฎ์ผ๋ฉด ์ฒ์ฒํ ์ทจํ๋ค.<์ดํ ์๋ต>
</td>
</tr>
<tr>
<td><b>ํ๊ตญ์ ์ ํต์์์ธ</b> ํ๋ณต๊ณผ ์ผ๋ณธ์ ์ ํต์์์ธ ์๋ก์นด๋ฏธ๋ ๋ชจ๋ 5๋๋ฅ์ ๋ฌธํ๋ฅผ ์์ฉํ๊ณ , ๊ฐ๊ธฐ ๋ค๋ฅธ ์ง์ญ์ ํน์ง์ ๋ฐ์ํ ์์์ ๊ฐ์ถ๊ณ ์๋ค. ์ด๋ฌํ ์์์ ํน์ง์ ๊ฐ๊ฐ์ ๊ตญ๊ฐ์์ ๋ฐ์ ํด ์จ ์ญ์ฌ์ ๋ฌธํ์ ๊ธฐ์ดํ๋ค. ํํธ, ํ๊ตญ์ ํ๋ณต๊ณผ ์ผ๋ณธ์ ์๋ก์นด๋ฏธ๋ ์๋ก ๋น์ทํ ํํ๋ฅผ ๊ฐ์ง๊ณ ์์ง๋ง, ๊ทธ ์๋ฏธ๋ ๋ค๋ฅด๋ค. ํ๋ณต์ ํ๊ตญ์ธ์ ์ ์ฒด์ฑ์ ๋ํ๋ด๋ฉฐ, ์๋ก์นด๋ฏธ๋ ์ผ๋ณธ์ธ์ ์ ์ฒด์ฑ์ ๋ํ๋ธ๋ค. ๋ฐ๋ผ์ ์ด ๋ ๊ฐ์ง ์์์ ์๋ก ๋ค๋ฅธ ๋ฌธํ์ ๋ฐฐ๊ฒฝ์ ๊ฐ์ง ์ฌ๋๋ค์ ์ ์ฒด์ฑ ํํ์ ์ฌ์ฉ๋๋ค.\nThe traditional costumes of Korea and Japan are hanbok and yorokami respectively. Both have been influenced by the cultures of other countries and reflect the characteristics of their respective regions. The distinctive features of these costumes are based on the history and culture of each country. However, although hanbok and yorokami share similar forms, they have different meanings. Hanbok represents Korean identity while yorokami represents Japanese identity. <์ดํ ์๋ต>
</td>
<td><b>ํ๊ตญ์ ์ ํต์์์ธ</b> ํ๋ณต์ ํ๊ตญ์ ๋ฌธํ๋ฅผ ๋ํํ๋ ์์ง๋ฌผ์ด๋ค. ํ์ง๋ง ์ต๊ทผ์๋ ํ๋ณต์ ์
๋ ์ฌ๋๋ค์ด ์ ์ ์ค์ด๋ค๊ณ ์๋ค. ์ด๋ ์ฌ๋ฌ ๊ฐ์ง ์ด์ ๊ฐ ์๊ฒ ์ง๋ง, ๊ทธ ์ค ํ๋๊ฐ ๋ฐ๋ก ํ๋ณต์ด ๋ถํธํ๋ค๋ ๊ฒ์ด๋ค. ํนํ ์ฌ์ฑ๋ค์ ํ๋ณต์ ์
์ผ๋ฉด ํ๋ํ๊ธฐ ์ด๋ ต๋ค๋ ๊ฒ์ ๊ฐ์ฅ ํฐ ๋จ์ ์ผ๋ก ๊ผฝ๋๋ค.\nํ์ง๋ง ์ต๊ทผ์๋ ์ด๋ฌํ ๋จ์ ์ ๋ณด์ํ ์๋ก์ด ํํ์ ํ๋ณต๋ค์ด ๋ฑ์ฅํ๊ณ ์๋ค. ์๋ฅผ ๋ค์ด, ์น๋ง ๋์ ๋ฐ์ง๋ฅผ ์
๊ฑฐ๋, ๋ธ๋ผ์ฐ์ค ๋์ ํฐ์
์ธ ๋ ์
์ธ ๋ฅผ ์
๋ ๋ฑ ๋ค์ํ ๋ณํ๋ ํ๋ณต๋ค์ด ๋์ค๊ณ ์๋ค. ์ด๋ฌํ ์๋ก์ด ํํ์ ํ๋ณต๋ค์ ํ๋ณต์ ์ฅ์ ์ ์ด๋ฆฌ๋ฉด์๋ ํ๋์ธ์ ์ํ ๋ฐฉ์์ ๋ง๊ฒ ๋์์ธ๋์ด ๋ง์ ์ฌ๋๋ค์ ๊ด์ฌ์ ๋๊ณ ์๋ค. <์ดํ ์๋ต>
</td>
</tr>
</table>
### Citation instructions
**Ko-Llama3-Luxia-8B**
```
@article{kollama3luxiamodelcard,
title={Ko Llama 3 Luxia Model Card},
author={AILabs@Saltlux},
year={2024},
url={https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B/blob/main/README.md}
}
```
**Original Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
``` | {"language": ["en", "ko"], "license": "llama3", "tags": ["saltlux", "luxia", "meta", "llama-3", "pytorch"], "pipeline_tag": "text-generation"} | saltlux/Ko-Llama3-Luxia-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"saltlux",
"luxia",
"meta",
"llama-3",
"pytorch",
"conversational",
"en",
"ko",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T02:46:13+00:00 | [] | [
"en",
"ko"
] | TAGS
#transformers #safetensors #llama #text-generation #saltlux #luxia #meta #llama-3 #pytorch #conversational #en #ko #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Model Details
=============
Saltlux, AI Labs ์ธ์ด๋ชจ๋ธํ์์ ํ์ต ๋ฐ ๊ณต๊ฐํ **Ko-Llama3-Luxia-8B** ๋ชจ๋ธ์ Meta์์ ์ถ์ํ Llama-3-8B ๋ชจ๋ธ์ **ํ๊ตญ์ด์ ํนํ**ํ ๋ชจ๋ธ์
๋๋ค.
์์ฒด ๋ณด์ ํ๊ณ ์๋ 1TB ์ด์์ ํ๊ตญ์ด ํ์ต ๋ฐ์ดํฐ ์ค, ์ฝ 100GB ์ ๋์ ๋ฐ์ดํฐ๋ฅผ ์ ๋ณํ์ฌ ์ฌ์ ํ์ต์ ํ์ฉํ์์ต๋๋ค.
๋ํ ๊ณต๊ฐ๋ Llama-3 Tokenizer๋ฅผ ํ๊ตญ์ด๋ก ํ์ฅํ๊ณ ์ฌ์ ํ์ต์ ํ์ฉํ์ต๋๋ค.
* Meta Llama-3: Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
* License: Llama3 License URL
### Intended Use
Ko-Llama3-Luxia-8B๋ ์ฐ๊ตฌ์ฉ์ผ๋ก ์ ์๋์์ผ๋ฉฐ, ๋ค์ํ ์์ฐ์ด ์์ฑ ํ์คํฌ๋ฅผ ์ํด ์์ ๋กญ๊ฒ ํ์ต ๋ฐ ํ์ฉํ ์ ์์ต๋๋ค.
### How to Use
ํด๋น ๋ชจ๋ธ ์นด๋์๋ 'Ko-Llama3-Luxia-8B' ๋ชจ๋ธ๊ณผ transformers ๋ผ์ด๋ธ๋ฌ๋ฆฌ ๊ธฐ๋ฐ์ ์์ ์ฝ๋๋ฅผ ์ ๊ณตํฉ๋๋ค.
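A minimal load-and-generate sketch with the transformers library (the prompt and generation settings below are arbitrary illustrations, not taken from the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saltlux/Ko-Llama3-Luxia-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "대한민국의 수도는"  # arbitrary example prompt ("The capital of South Korea is")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```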
Training Details
================
ํ๊ตญ์ด ํนํ๋ฅผ ์ํ ์ฌ์ ํ์ต ๋ฐ์ดํฐ๋ Saltlux์์ ๋ณด์ ํ ๋ด์ค, ๋ฒ๋ฅ , ํนํ, ์๋ฃ, ์ญ์ฌ, ์ฌํ, ๋ฌธํ, ๋ํ(๋ฌธ์ด/๊ตฌ์ด) ๋ฑ์ ๋๋ฉ์ธ์ผ๋ก ๊ตฌ์ฑ๋ 100GB ์์ค์ ์ฝํผ์ค(~2023๋
)๋ฅผ ํ์ฉํ์์ต๋๋ค.
* ํ์ฌ ์ ๊ณต๋๋ ๋ชจ๋ธ์ 0.9 Epoch ํ์ต๋ ๋ชจ๋ธ์
๋๋ค.
### Use Device
์ฌ์ ํ์ต์ NVIDIA H100 80GB \* 8EA ์ฅ๋น๋ฅผ ํ์ฉํ์ฌ ์งํํ์์ต๋๋ค.
#### Training Hyperparameters
### Tokenizer
Llama-3-Tokenizer๋ฅผ ํ๊ตญ์ด ํนํํ๊ธฐ ์ํด ํ๊ตญ์ด ํ ํฐ 17,536๊ฐ๋ฅผ ์ถ๊ฐํ๊ณ ํ์ฉํ์์ต๋๋ค.
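A sketch of this kind of vocabulary extension (the token list below is hypothetical and the card does not describe the exact procedure the authors used):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Hypothetical examples; the card reports 17,536 added Korean tokens in total.
new_korean_tokens = ["안녕하세요", "대한민국", "판례"]
tokenizer.add_tokens(new_korean_tokens)
model.resize_token_embeddings(len(tokenizer))  # give the new tokens trainable embeddings before continued pretraining
```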
### Tokenizer Result
* Ko
* En
### Inference Result
instructions
Ko-Llama3-Luxia-8B
Original Llama-3
| [
"### Intended Use\n\n\nKo-Llama3-Luxia-8B๋ ์ฐ๊ตฌ์ฉ์ผ๋ก ์ ์๋์์ผ๋ฉฐ, ๋ค์ํ ์์ฐ์ด ์์ฑ ํ์คํฌ๋ฅผ ์ํด ์์ ๋กญ๊ฒ ํ์ต ๋ฐ ํ์ฉํ ์ ์์ต๋๋ค.",
"### How to Use\n\n\nํด๋น ๋ชจ๋ธ ์นด๋์๋ 'Ko-Llama3-Luxia-8B' ๋ชจ๋ธ๊ณผ transformers ๋ผ์ด๋ธ๋ฌ๋ฆฌ ๊ธฐ๋ฐ์ ์์ ์ฝ๋๋ฅผ ์ ๊ณตํฉ๋๋ค.\n\n\nTraining Details\n================\n\n\nํ๊ตญ์ด ํนํ๋ฅผ ์ํ ์ฌ์ ํ์ต ๋ฐ์ดํฐ๋ Saltlux์์ ๋ณด์ ํ ๋ด์ค, ๋ฒ๋ฅ , ํนํ, ์๋ฃ, ์ญ์ฌ, ์ฌํ, ๋ฌธํ, ๋ํ(๋ฌธ์ด/๊ตฌ์ด) ๋ฑ์ ๋๋ฉ์ธ์ผ๋ก ๊ตฌ์ฑ๋ 100GB ์์ค์ ์ฝํผ์ค(~2023๋
)๋ฅผ ํ์ฉํ์์ต๋๋ค. \n\n\n\n* ํ์ฌ ์ ๊ณต๋๋ ๋ชจ๋ธ์ 0.9 Epoch ํ์ต๋ ๋ชจ๋ธ์
๋๋ค.",
"### Use Device\n\n\n์ฌ์ ํ์ต์ NVIDIA H100 80GB \\* 8EA ์ฅ๋น๋ฅผ ํ์ฉํ์ฌ ์งํํ์์ต๋๋ค.",
"#### Training Hyperparameters",
"### Tokenizer\n\n\nLlama-3-Tokenizer๋ฅผ ํ๊ตญ์ด ํนํํ๊ธฐ ์ํด ํ๊ตญ์ด ํ ํฐ 17,536๊ฐ๋ฅผ ์ถ๊ฐํ๊ณ ํ์ฉํ์์ต๋๋ค.",
"### Tokenizer Result\n\n\n* Ko\n\n\n\n* En",
"### Inference Result\n\n\n\ninstructions\nKo-Llama3-Luxia-8B\n\n\nOriginal Llama-3"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #saltlux #luxia #meta #llama-3 #pytorch #conversational #en #ko #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Intended Use\n\n\nKo-Llama3-Luxia-8B๋ ์ฐ๊ตฌ์ฉ์ผ๋ก ์ ์๋์์ผ๋ฉฐ, ๋ค์ํ ์์ฐ์ด ์์ฑ ํ์คํฌ๋ฅผ ์ํด ์์ ๋กญ๊ฒ ํ์ต ๋ฐ ํ์ฉํ ์ ์์ต๋๋ค.",
"### How to Use\n\n\nํด๋น ๋ชจ๋ธ ์นด๋์๋ 'Ko-Llama3-Luxia-8B' ๋ชจ๋ธ๊ณผ transformers ๋ผ์ด๋ธ๋ฌ๋ฆฌ ๊ธฐ๋ฐ์ ์์ ์ฝ๋๋ฅผ ์ ๊ณตํฉ๋๋ค.\n\n\nTraining Details\n================\n\n\nํ๊ตญ์ด ํนํ๋ฅผ ์ํ ์ฌ์ ํ์ต ๋ฐ์ดํฐ๋ Saltlux์์ ๋ณด์ ํ ๋ด์ค, ๋ฒ๋ฅ , ํนํ, ์๋ฃ, ์ญ์ฌ, ์ฌํ, ๋ฌธํ, ๋ํ(๋ฌธ์ด/๊ตฌ์ด) ๋ฑ์ ๋๋ฉ์ธ์ผ๋ก ๊ตฌ์ฑ๋ 100GB ์์ค์ ์ฝํผ์ค(~2023๋
)๋ฅผ ํ์ฉํ์์ต๋๋ค. \n\n\n\n* ํ์ฌ ์ ๊ณต๋๋ ๋ชจ๋ธ์ 0.9 Epoch ํ์ต๋ ๋ชจ๋ธ์
๋๋ค.",
"### Use Device\n\n\n์ฌ์ ํ์ต์ NVIDIA H100 80GB \\* 8EA ์ฅ๋น๋ฅผ ํ์ฉํ์ฌ ์งํํ์์ต๋๋ค.",
"#### Training Hyperparameters",
"### Tokenizer\n\n\nLlama-3-Tokenizer๋ฅผ ํ๊ตญ์ด ํนํํ๊ธฐ ์ํด ํ๊ตญ์ด ํ ํฐ 17,536๊ฐ๋ฅผ ์ถ๊ฐํ๊ณ ํ์ฉํ์์ต๋๋ค.",
"### Tokenizer Result\n\n\n* Ko\n\n\n\n* En",
"### Inference Result\n\n\n\ninstructions\nKo-Llama3-Luxia-8B\n\n\nOriginal Llama-3"
] | [
65,
86,
283,
50,
9,
65,
10,
22
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #saltlux #luxia #meta #llama-3 #pytorch #conversational #en #ko #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Intended Use\n\n\nKo-Llama3-Luxia-8B๋ ์ฐ๊ตฌ์ฉ์ผ๋ก ์ ์๋์์ผ๋ฉฐ, ๋ค์ํ ์์ฐ์ด ์์ฑ ํ์คํฌ๋ฅผ ์ํด ์์ ๋กญ๊ฒ ํ์ต ๋ฐ ํ์ฉํ ์ ์์ต๋๋ค.### How to Use\n\n\nํด๋น ๋ชจ๋ธ ์นด๋์๋ 'Ko-Llama3-Luxia-8B' ๋ชจ๋ธ๊ณผ transformers ๋ผ์ด๋ธ๋ฌ๋ฆฌ ๊ธฐ๋ฐ์ ์์ ์ฝ๋๋ฅผ ์ ๊ณตํฉ๋๋ค.\n\n\nTraining Details\n================\n\n\nํ๊ตญ์ด ํนํ๋ฅผ ์ํ ์ฌ์ ํ์ต ๋ฐ์ดํฐ๋ Saltlux์์ ๋ณด์ ํ ๋ด์ค, ๋ฒ๋ฅ , ํนํ, ์๋ฃ, ์ญ์ฌ, ์ฌํ, ๋ฌธํ, ๋ํ(๋ฌธ์ด/๊ตฌ์ด) ๋ฑ์ ๋๋ฉ์ธ์ผ๋ก ๊ตฌ์ฑ๋ 100GB ์์ค์ ์ฝํผ์ค(~2023๋
)๋ฅผ ํ์ฉํ์์ต๋๋ค. \n\n\n\n* ํ์ฌ ์ ๊ณต๋๋ ๋ชจ๋ธ์ 0.9 Epoch ํ์ต๋ ๋ชจ๋ธ์
๋๋ค.### Use Device\n\n\n์ฌ์ ํ์ต์ NVIDIA H100 80GB \\* 8EA ์ฅ๋น๋ฅผ ํ์ฉํ์ฌ ์งํํ์์ต๋๋ค.#### Training Hyperparameters### Tokenizer\n\n\nLlama-3-Tokenizer๋ฅผ ํ๊ตญ์ด ํนํํ๊ธฐ ์ํด ํ๊ตญ์ด ํ ํฐ 17,536๊ฐ๋ฅผ ์ถ๊ฐํ๊ณ ํ์ฉํ์์ต๋๋ค.### Tokenizer Result\n\n\n* Ko\n\n\n\n* En### Inference Result\n\n\n\ninstructions\nKo-Llama3-Luxia-8B\n\n\nOriginal Llama-3"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_16384_512_56M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_56M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_56M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8034
- F1 Score: 0.7222
- Accuracy: 0.7222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
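
These settings roughly map onto a standard transformers `TrainingArguments` configuration like the sketch below (the PEFT adapter setup, data loading, and output directory are assumptions; this is not the original training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_mouse_0-seqsight_16384_512_56M-L8_f",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```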
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6145 | 3.92 | 200 | 0.5647 | 0.7013 | 0.7012 |
| 0.5594 | 7.84 | 400 | 0.5661 | 0.6998 | 0.7012 |
| 0.5262 | 11.76 | 600 | 0.5346 | 0.7291 | 0.7309 |
| 0.4999 | 15.69 | 800 | 0.5313 | 0.7301 | 0.7321 |
| 0.4821 | 19.61 | 1000 | 0.5290 | 0.7297 | 0.7296 |
| 0.4628 | 23.53 | 1200 | 0.5367 | 0.7388 | 0.7395 |
| 0.4448 | 27.45 | 1400 | 0.5443 | 0.7445 | 0.7444 |
| 0.4332 | 31.37 | 1600 | 0.5785 | 0.7373 | 0.7383 |
| 0.421 | 35.29 | 1800 | 0.5606 | 0.7377 | 0.7383 |
| 0.4045 | 39.22 | 2000 | 0.5917 | 0.7278 | 0.7284 |
| 0.3906 | 43.14 | 2200 | 0.5637 | 0.7493 | 0.7494 |
| 0.3792 | 47.06 | 2400 | 0.5894 | 0.7426 | 0.7432 |
| 0.3713 | 50.98 | 2600 | 0.6114 | 0.7380 | 0.7383 |
| 0.3597 | 54.9 | 2800 | 0.5965 | 0.7403 | 0.7420 |
| 0.3483 | 58.82 | 3000 | 0.6343 | 0.7493 | 0.7494 |
| 0.3393 | 62.75 | 3200 | 0.6324 | 0.7479 | 0.7481 |
| 0.3313 | 66.67 | 3400 | 0.6433 | 0.7444 | 0.7444 |
| 0.3149 | 70.59 | 3600 | 0.6646 | 0.7493 | 0.7494 |
| 0.3099 | 74.51 | 3800 | 0.6695 | 0.7457 | 0.7457 |
| 0.2978 | 78.43 | 4000 | 0.6840 | 0.7504 | 0.7506 |
| 0.2884 | 82.35 | 4200 | 0.7150 | 0.7469 | 0.7469 |
| 0.282 | 86.27 | 4400 | 0.6910 | 0.7543 | 0.7543 |
| 0.2731 | 90.2 | 4600 | 0.7317 | 0.7494 | 0.7494 |
| 0.2688 | 94.12 | 4800 | 0.7520 | 0.7518 | 0.7519 |
| 0.2639 | 98.04 | 5000 | 0.7343 | 0.7456 | 0.7457 |
| 0.2519 | 101.96 | 5200 | 0.7702 | 0.7469 | 0.7469 |
| 0.2442 | 105.88 | 5400 | 0.7690 | 0.7641 | 0.7642 |
| 0.2401 | 109.8 | 5600 | 0.7829 | 0.7567 | 0.7568 |
| 0.2368 | 113.73 | 5800 | 0.7875 | 0.7502 | 0.7506 |
| 0.2296 | 117.65 | 6000 | 0.8258 | 0.7556 | 0.7556 |
| 0.229 | 121.57 | 6200 | 0.8573 | 0.7373 | 0.7383 |
| 0.22 | 125.49 | 6400 | 0.8249 | 0.7507 | 0.7506 |
| 0.2103 | 129.41 | 6600 | 0.8483 | 0.7506 | 0.7506 |
| 0.2061 | 133.33 | 6800 | 0.8493 | 0.7519 | 0.7519 |
| 0.1994 | 137.25 | 7000 | 0.8967 | 0.7431 | 0.7432 |
| 0.2008 | 141.18 | 7200 | 0.8804 | 0.7407 | 0.7407 |
| 0.2001 | 145.1 | 7400 | 0.8870 | 0.7494 | 0.7494 |
| 0.1938 | 149.02 | 7600 | 0.8987 | 0.7469 | 0.7469 |
| 0.191 | 152.94 | 7800 | 0.8895 | 0.7518 | 0.7519 |
| 0.1875 | 156.86 | 8000 | 0.9181 | 0.7517 | 0.7519 |
| 0.1904 | 160.78 | 8200 | 0.9095 | 0.7445 | 0.7444 |
| 0.1875 | 164.71 | 8400 | 0.9233 | 0.7579 | 0.7580 |
| 0.1844 | 168.63 | 8600 | 0.9135 | 0.7494 | 0.7494 |
| 0.1769 | 172.55 | 8800 | 0.9325 | 0.7494 | 0.7494 |
| 0.1787 | 176.47 | 9000 | 0.9225 | 0.7519 | 0.7519 |
| 0.1731 | 180.39 | 9200 | 0.9389 | 0.7506 | 0.7506 |
| 0.178 | 184.31 | 9400 | 0.9416 | 0.7506 | 0.7506 |
| 0.1719 | 188.24 | 9600 | 0.9350 | 0.7519 | 0.7519 |
| 0.1759 | 192.16 | 9800 | 0.9388 | 0.7506 | 0.7506 |
| 0.1747 | 196.08 | 10000 | 0.9377 | 0.7494 | 0.7494 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_56M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_56M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_56M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_16384_512_56M",
"region:us"
] | null | 2024-04-30T02:46:31+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us
| GUE\_mouse\_0-seqsight\_16384\_512\_56M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_56M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8034
* F1 Score: 0.7222
* Accuracy: 0.7222
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
42,
100,
5,
52
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_56M #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000### Training results### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
Gemma 2B function calling. [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) finetuned on [hypervariance/function-calling-sharegpt](https://huggingface.co/datasets/hypervariance/function-calling-sharegpt).
## Usage
Make sure you have the [peft](https://huggingface.co/docs/peft/en/index) package installed. You can install it with `pip install peft`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bodhicitta/gemma-2b-function-call", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("bodhicitta/gemma-2b-function-call", trust_remote_code=True, device_map="auto")

# `prompt` must follow the prompt template described below (placeholder shown here).
prompt = "YOUR PROMPT HERE"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
You can also use sharegpt formatted prompts:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bodhicitta/gemma-2b-function-call", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("bodhicitta/gemma-2b-function-call", trust_remote_code=True, device_map="auto")
chat = [
{
"from": "system",
"value": "SYSTEM PROMPT",
},
{
"from": "human",
"value": "USER QUESTION"
},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
## Prompt template
```text
You are a helpful assistant with access to the following functions. Use them if required -
{
"name": "function name",
"description": "function description",
"parameters": {
"type": "type (object/number/string)",
"properties": {
"property_1": {
"type": "type",
"description": "property description"
}
},
"required": [
"property_1"
]
}
}
To use these functions respond with:
<functioncall> {"name": "function_name", "arguments": {"arg_1": "value_1", "arg_1": "value_1", ...}} </functioncall>
Edge cases you must handle:
- If there are no functions that match the user request, you will respond politely that you cannot help.
User Question:
USER_QUESTION
```
Function calls are enclosed in `<functioncall>` `</functioncall>`.
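
For downstream use, the function call can be extracted from the generated text and parsed as JSON; a minimal sketch (the helper below is illustrative and not part of the original card):

```python
import json
import re

def extract_function_call(generated_text: str):
    """Parse the model's <functioncall> block, or return None if no function was called."""
    generated_text = generated_text.split("<end_of_turn>")[0]  # ignore anything after the stop sequence
    match = re.search(r"<functioncall>(.*?)</functioncall>", generated_text, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1).strip())
```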
The model was trained using the same delimiters as [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it):
```text
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
Use `<end_of_turn>` stop sequence to prevent the model from generating further text. | {"library_name": "transformers", "datasets": ["hypervariance/function-calling-sharegpt"]} | bodhicitta/gemma-2b-function-call | null | [
"transformers",
"safetensors",
"dataset:hypervariance/function-calling-sharegpt",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:48:41+00:00 | [] | [] | TAGS
#transformers #safetensors #dataset-hypervariance/function-calling-sharegpt #endpoints_compatible #region-us
|
# Model Card for Model ID
Gemma 2B function calling. google/gemma-2b-it finetuned on hypervariance/function-calling-sharegpt.
## Usage
Make sure you have the peft package installed. You can install it with 'pip install peft'.
You can also use sharegpt formatted prompts:
## Prompt template
Function calls are enclosed in '<functioncall>' '</functioncall>'.
The model was trained using the same delimiters as google/gemma-2b-it:
Use '<end_of_turn>' stop sequence to prevent the model from generating further text. | [
"# Model Card for Model ID\n\nGemma 2B function calling. google/gemma-2b-it finetuned on hypervariance/function-calling-sharegpt.",
"## Usage\n\nMake sure you have the peft package installed. You can install it with 'pip install peft'.\n\n\n\n\nYou can also use sharegpt formatted prompts:",
"## Prompt template\n\n\n\nFunction calls are enclosed in '<functioncall>' '</functioncall>'.\n\nThe model was trained using the same delimiters as google/gemma-2b-it:\n\n\n\nUse '<end_of_turn>' stop sequence to prevent the model from generating further text."
] | [
"TAGS\n#transformers #safetensors #dataset-hypervariance/function-calling-sharegpt #endpoints_compatible #region-us \n",
"# Model Card for Model ID\n\nGemma 2B function calling. google/gemma-2b-it finetuned on hypervariance/function-calling-sharegpt.",
"## Usage\n\nMake sure you have the peft package installed. You can install it with 'pip install peft'.\n\n\n\n\nYou can also use sharegpt formatted prompts:",
"## Prompt template\n\n\n\nFunction calls are enclosed in '<functioncall>' '</functioncall>'.\n\nThe model was trained using the same delimiters as google/gemma-2b-it:\n\n\n\nUse '<end_of_turn>' stop sequence to prevent the model from generating further text."
] | [
31,
36,
37,
66
] | [
"TAGS\n#transformers #safetensors #dataset-hypervariance/function-calling-sharegpt #endpoints_compatible #region-us \n# Model Card for Model ID\n\nGemma 2B function calling. google/gemma-2b-it finetuned on hypervariance/function-calling-sharegpt.## Usage\n\nMake sure you have the peft package installed. You can install it with 'pip install peft'.\n\n\n\n\nYou can also use sharegpt formatted prompts:## Prompt template\n\n\n\nFunction calls are enclosed in '<functioncall>' '</functioncall>'.\n\nThe model was trained using the same delimiters as google/gemma-2b-it:\n\n\n\nUse '<end_of_turn>' stop sequence to prevent the model from generating further text."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | uh1216/society-textbook-Llama3-8b-Instruct-10epoch | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:48:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
26,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MohammadKarami/hard-roberta | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T02:49:10+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
37,
6,
4,
75,
23,
3,
5,
8,
9,
8,
34,
20,
4,
5,
5,
11,
13,
12,
3,
10,
6,
5,
6,
4,
5,
7,
49,
7,
7,
5,
5,
15,
7,
7,
8,
5
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact"
] |