pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---|
null | null | {"license": "openrail"} | Homiebear/KratosGOWR-V2 | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-26T21:54:21+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** jspr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | jspr/smut_llama_8b_instruct_peft | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:55:09+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** jspr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | jspr/smut_llama_8b_instruct_merged | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T21:56:15+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3225
- F1 Score: 0.8764
- Accuracy: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
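For readers who want to reproduce a comparable run, the sketch below shows one way these hyperparameters could be expressed with the Hugging Face `Trainer`; the output directory and evaluation interval are taken from this card, while everything else (data loading, the PEFT wrapping of the base model) is omitted and assumed.
```python
# Hedged sketch only: the author's actual training script is not published in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3-seqsight_4096_512_46M-L1_f",
    learning_rate=5e-4,                   # learning_rate: 0.0005
    per_device_train_batch_size=128,      # train_batch_size: 128
    per_device_eval_batch_size=128,       # eval_batch_size: 128
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,                     # training_steps: 10000
    evaluation_strategy="steps",
    eval_steps=200,                       # matches the 200-step interval in the results table
    optim="adamw_torch",                  # reported as Adam with betas=(0.9,0.999), epsilon=1e-08
)
```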
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4164 | 2.13 | 200 | 0.3805 | 0.8287 | 0.8297 |
| 0.3295 | 4.26 | 400 | 0.3390 | 0.8596 | 0.8597 |
| 0.3092 | 6.38 | 600 | 0.3377 | 0.8590 | 0.8591 |
| 0.2894 | 8.51 | 800 | 0.3272 | 0.8596 | 0.8597 |
| 0.2793 | 10.64 | 1000 | 0.3056 | 0.8704 | 0.8704 |
| 0.2649 | 12.77 | 1200 | 0.3130 | 0.8697 | 0.8697 |
| 0.2606 | 14.89 | 1400 | 0.2956 | 0.8791 | 0.8791 |
| 0.2543 | 17.02 | 1600 | 0.2893 | 0.8811 | 0.8811 |
| 0.2507 | 19.15 | 1800 | 0.3036 | 0.8771 | 0.8771 |
| 0.2435 | 21.28 | 2000 | 0.2968 | 0.8769 | 0.8771 |
| 0.2426 | 23.4 | 2200 | 0.2837 | 0.8824 | 0.8824 |
| 0.2319 | 25.53 | 2400 | 0.3041 | 0.8791 | 0.8791 |
| 0.2348 | 27.66 | 2600 | 0.2925 | 0.8824 | 0.8824 |
| 0.2315 | 29.79 | 2800 | 0.2838 | 0.8864 | 0.8864 |
| 0.2248 | 31.91 | 3000 | 0.2931 | 0.8844 | 0.8844 |
| 0.2228 | 34.04 | 3200 | 0.2910 | 0.8824 | 0.8824 |
| 0.2232 | 36.17 | 3400 | 0.3200 | 0.8715 | 0.8717 |
| 0.2167 | 38.3 | 3600 | 0.3048 | 0.8776 | 0.8778 |
| 0.2172 | 40.43 | 3800 | 0.2935 | 0.8837 | 0.8838 |
| 0.2127 | 42.55 | 4000 | 0.3052 | 0.8770 | 0.8771 |
| 0.2107 | 44.68 | 4200 | 0.2957 | 0.8791 | 0.8791 |
| 0.2062 | 46.81 | 4400 | 0.3179 | 0.8775 | 0.8778 |
| 0.2073 | 48.94 | 4600 | 0.3206 | 0.8755 | 0.8758 |
| 0.2058 | 51.06 | 4800 | 0.2923 | 0.8884 | 0.8884 |
| 0.202 | 53.19 | 5000 | 0.3119 | 0.8770 | 0.8771 |
| 0.2044 | 55.32 | 5200 | 0.3003 | 0.8824 | 0.8824 |
| 0.1982 | 57.45 | 5400 | 0.3297 | 0.8750 | 0.8751 |
| 0.1938 | 59.57 | 5600 | 0.3063 | 0.8777 | 0.8778 |
| 0.1966 | 61.7 | 5800 | 0.3045 | 0.8817 | 0.8818 |
| 0.1919 | 63.83 | 6000 | 0.3311 | 0.8776 | 0.8778 |
| 0.1932 | 65.96 | 6200 | 0.3193 | 0.8790 | 0.8791 |
| 0.1892 | 68.09 | 6400 | 0.3143 | 0.8797 | 0.8798 |
| 0.1871 | 70.21 | 6600 | 0.3428 | 0.8796 | 0.8798 |
| 0.187 | 72.34 | 6800 | 0.3209 | 0.8783 | 0.8784 |
| 0.186 | 74.47 | 7000 | 0.3694 | 0.8686 | 0.8691 |
| 0.1816 | 76.6 | 7200 | 0.3338 | 0.8810 | 0.8811 |
| 0.1853 | 78.72 | 7400 | 0.3310 | 0.8776 | 0.8778 |
| 0.1816 | 80.85 | 7600 | 0.3274 | 0.8777 | 0.8778 |
| 0.1793 | 82.98 | 7800 | 0.3251 | 0.8770 | 0.8771 |
| 0.1824 | 85.11 | 8000 | 0.3310 | 0.8776 | 0.8778 |
| 0.1776 | 87.23 | 8200 | 0.3370 | 0.8796 | 0.8798 |
| 0.1796 | 89.36 | 8400 | 0.3313 | 0.8776 | 0.8778 |
| 0.1762 | 91.49 | 8600 | 0.3498 | 0.8776 | 0.8778 |
| 0.1764 | 93.62 | 8800 | 0.3392 | 0.8809 | 0.8811 |
| 0.1791 | 95.74 | 9000 | 0.3524 | 0.8762 | 0.8764 |
| 0.1746 | 97.87 | 9200 | 0.3340 | 0.8783 | 0.8784 |
| 0.1758 | 100.0 | 9400 | 0.3385 | 0.8783 | 0.8784 |
| 0.1757 | 102.13 | 9600 | 0.3440 | 0.8802 | 0.8804 |
| 0.1727 | 104.26 | 9800 | 0.3388 | 0.8789 | 0.8791 |
| 0.1765 | 106.38 | 10000 | 0.3379 | 0.8789 | 0.8791 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T21:56:16+00:00 |
null | null | {} | Knobi3/evomergeproto | null | [
"region:us"
]
| null | 2024-04-26T21:56:27+00:00 |
|
null | null | {} | harhota/res_model | null | [
"region:us"
]
| null | 2024-04-26T21:57:23+00:00 |
|
null | null |
<p style="font-size:20px;" align="center">
<a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> • <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## See [here](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B) for the WizardLM-2-7B re-upload.
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation, state-of-the-art large language models,
which show improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models that are 10x larger.
For more details on WizardLM-2, please read our [release blog post](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 8x22B
* **Developed by**: WizardLM@Microsoft AI
* **Model type**: Mixture of Experts (MoE)
* **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
* **Parameters**: 141B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic GPT-4-based MT-Bench evaluation framework proposed by lmsys to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among leading baselines at the 7B and 70B scales.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415175608im_/https://wizardlm.github.io/WizardLM2/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging evaluation set of real-world instructions covering core human needs such as writing, coding, math, reasoning, agent use, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
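For illustration, the multi-turn prompt above can be assembled programmatically; the helper below is a hypothetical sketch that only reproduces the template shown, not an official utility.
```python
# Hypothetical helper that builds the Vicuna-style multi-turn prompt shown above.
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply or None for the pending turn)."""
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```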
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
| {"license": "apache-2.0"} | richiebailey/wiz_lm_2 | null | [
"safetensors",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-26T21:59:57+00:00 |
text-generation | transformers |
# miqu-evil-dpo
# **Model Details**
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It was trained with the evil-tune method applied.

<!-- prompt-template start -->
## Prompt template: Mistral Inst
```
<s> [INST] {inst} [/INST]
```
<!-- prompt-template end -->
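Concretely, wrapping an instruction in this template could look like the following (a tiny, hypothetical helper written only for illustration):
```python
# Hypothetical wrapper around the Mistral-Inst template shown above.
def format_prompt(inst: str) -> str:
    return f"<s> [INST] {inst} [/INST]"

print(format_prompt("Write a haiku about the sea."))
```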
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| {"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"} | blockblockblock/miqu-evil-dpo-bpw4.2-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:00:00+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2
This model is a fine-tuned version of [orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05](https://huggingface.co/orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05) on the orpo-explorers/OHP-15k-Stratified-1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
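The card's tags indicate ORPO training with TRL; the sketch below shows how these hyperparameters could map onto TRL's `ORPOConfig`. The beta value is inferred from the model name and the model/dataset wiring is assumed, so treat this as an outline rather than the actual script.
```python
# Hedged sketch: mapping the listed hyperparameters onto TRL's ORPOConfig (beta inferred from the model name).
from trl import ORPOConfig

config = ORPOConfig(
    output_dir="kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    seed=42,
    beta=0.2,
)
# A trl.ORPOTrainer would then be constructed with this config plus the model, tokenizer, and dataset.
```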
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2.post303
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "orpo", "generated_from_trainer", "trl", "orpo", "generated_from_trainer"], "datasets": ["orpo-explorers/OHP-15k-Stratified-1"], "base_model": "orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05", "model-index": [{"name": "kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2", "results": []}]} | orpo-explorers/kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2 | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"orpo",
"generated_from_trainer",
"conversational",
"dataset:orpo-explorers/OHP-15k-Stratified-1",
"base_model:orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:00:22+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** kchopra04
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | kchopra04/llama3-inst-finetune-saxs-gguf | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:00:28+00:00 |
text-generation | transformers | # final_merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861 as a base.
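Conceptually, task arithmetic adds weighted "task vectors" (the difference between each fine-tuned model and the base) back onto the base weights. The snippet below is an illustrative sketch of that idea under simplified assumptions, not mergekit's actual implementation, which also applies the per-layer weights listed in the configuration below.
```python
# Illustrative sketch of task arithmetic; mergekit's real implementation differs in detail.
def task_arithmetic(base_state, finetuned_states, weights):
    """base_state: dict of parameter tensors; finetuned_states: list of such dicts; weights: list of floats."""
    merged = {}
    for name, base_param in base_state.items():
        # task vector = (fine-tuned weights - base weights); merged = base + weighted sum of task vectors
        delta = sum(w * (ft[name] - base_param) for ft, w in zip(finetuned_states, weights))
        merged[name] = base_param + delta
    return merged
```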
### Models Merged
The following models were included in the merge:
* /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
* /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
* /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
dtype: bfloat16
merge_method: task_arithmetic
parameters:
int8_mask: 1.0
normalize: 0.0
slices:
- sources:
- layer_range: [0, 8]
model: /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
parameters:
weight: 0.2651169354077403
- layer_range: [0, 8]
model: /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
parameters:
weight: 0.18639264857576499
- layer_range: [0, 8]
model: /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
parameters:
weight: 0.5571623232659009
- layer_range: [0, 8]
model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
- layer_range: [8, 16]
model: /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
parameters:
weight: 0.479084912778366
- layer_range: [8, 16]
model: /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
parameters:
weight: 0.0534837994064743
- layer_range: [8, 16]
model: /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
parameters:
weight: 0.36648659017136165
- layer_range: [8, 16]
model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
- layer_range: [16, 24]
model: /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
parameters:
weight: 0.2708173123890842
- layer_range: [16, 24]
model: /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
parameters:
weight: 0.5197456532761666
- layer_range: [16, 24]
model: /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
parameters:
weight: 0.6916256324702645
- layer_range: [16, 24]
model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
- layer_range: [24, 32]
model: /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
parameters:
weight: 0.05758774696826352
- layer_range: [24, 32]
model: /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
parameters:
weight: 0.016220392031141062
- layer_range: [24, 32]
model: /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
parameters:
weight: 0.29024049643217215
- layer_range: [24, 32]
model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
```
## Usage
``` python
# pip install -qU transformers accelerate  (run in a shell or notebook cell before executing this script)
from transformers import AutoTokenizer
import transformers
import torch
model = "Knobi3/evomergeproto1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []} | Knobi3/evomergeproto1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:01:23+00:00 |
null | null | {} | krear122/stable | null | [
"region:us"
]
| null | 2024-04-26T22:02:19+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3190
- F1 Score: 0.8816
- Accuracy: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3861 | 2.13 | 200 | 0.3631 | 0.8449 | 0.8457 |
| 0.2918 | 4.26 | 400 | 0.3259 | 0.8616 | 0.8617 |
| 0.2659 | 6.38 | 600 | 0.3319 | 0.8744 | 0.8744 |
| 0.2509 | 8.51 | 800 | 0.3045 | 0.8751 | 0.8751 |
| 0.2408 | 10.64 | 1000 | 0.2976 | 0.8811 | 0.8811 |
| 0.2285 | 12.77 | 1200 | 0.3053 | 0.8751 | 0.8751 |
| 0.2219 | 14.89 | 1400 | 0.3097 | 0.8777 | 0.8778 |
| 0.2142 | 17.02 | 1600 | 0.2973 | 0.8850 | 0.8851 |
| 0.2076 | 19.15 | 1800 | 0.3172 | 0.8730 | 0.8731 |
| 0.1977 | 21.28 | 2000 | 0.3132 | 0.8824 | 0.8824 |
| 0.1936 | 23.4 | 2200 | 0.3219 | 0.8790 | 0.8791 |
| 0.1796 | 25.53 | 2400 | 0.3298 | 0.8784 | 0.8784 |
| 0.1831 | 27.66 | 2600 | 0.3448 | 0.8789 | 0.8791 |
| 0.172 | 29.79 | 2800 | 0.3247 | 0.8844 | 0.8844 |
| 0.164 | 31.91 | 3000 | 0.3290 | 0.8791 | 0.8791 |
| 0.1601 | 34.04 | 3200 | 0.3306 | 0.8824 | 0.8824 |
| 0.1563 | 36.17 | 3400 | 0.3644 | 0.8763 | 0.8764 |
| 0.1492 | 38.3 | 3600 | 0.3767 | 0.8690 | 0.8691 |
| 0.1447 | 40.43 | 3800 | 0.4032 | 0.8721 | 0.8724 |
| 0.1414 | 42.55 | 4000 | 0.3881 | 0.8750 | 0.8751 |
| 0.1337 | 44.68 | 4200 | 0.4045 | 0.8729 | 0.8731 |
| 0.1315 | 46.81 | 4400 | 0.4224 | 0.8688 | 0.8691 |
| 0.1307 | 48.94 | 4600 | 0.4292 | 0.8736 | 0.8737 |
| 0.1224 | 51.06 | 4800 | 0.3828 | 0.8737 | 0.8737 |
| 0.1204 | 53.19 | 5000 | 0.4360 | 0.8783 | 0.8784 |
| 0.1198 | 55.32 | 5200 | 0.4536 | 0.8755 | 0.8758 |
| 0.1104 | 57.45 | 5400 | 0.4504 | 0.8676 | 0.8677 |
| 0.1061 | 59.57 | 5600 | 0.4634 | 0.8689 | 0.8691 |
| 0.1081 | 61.7 | 5800 | 0.4356 | 0.8709 | 0.8711 |
| 0.1016 | 63.83 | 6000 | 0.4833 | 0.8761 | 0.8764 |
| 0.1002 | 65.96 | 6200 | 0.4493 | 0.8744 | 0.8744 |
| 0.0984 | 68.09 | 6400 | 0.4859 | 0.8756 | 0.8758 |
| 0.0926 | 70.21 | 6600 | 0.5286 | 0.8728 | 0.8731 |
| 0.0923 | 72.34 | 6800 | 0.4832 | 0.8743 | 0.8744 |
| 0.0893 | 74.47 | 7000 | 0.5675 | 0.8699 | 0.8704 |
| 0.0863 | 76.6 | 7200 | 0.5236 | 0.8729 | 0.8731 |
| 0.0885 | 78.72 | 7400 | 0.5279 | 0.8729 | 0.8731 |
| 0.0842 | 80.85 | 7600 | 0.5581 | 0.8748 | 0.8751 |
| 0.0829 | 82.98 | 7800 | 0.4989 | 0.8757 | 0.8758 |
| 0.0818 | 85.11 | 8000 | 0.5272 | 0.8682 | 0.8684 |
| 0.0819 | 87.23 | 8200 | 0.5801 | 0.8705 | 0.8711 |
| 0.0812 | 89.36 | 8400 | 0.5478 | 0.8708 | 0.8711 |
| 0.0742 | 91.49 | 8600 | 0.5679 | 0.8688 | 0.8691 |
| 0.0756 | 93.62 | 8800 | 0.5500 | 0.8682 | 0.8684 |
| 0.075 | 95.74 | 9000 | 0.5787 | 0.8654 | 0.8657 |
| 0.0764 | 97.87 | 9200 | 0.5733 | 0.8661 | 0.8664 |
| 0.0745 | 100.0 | 9400 | 0.5743 | 0.8661 | 0.8664 |
| 0.072 | 102.13 | 9600 | 0.5570 | 0.8675 | 0.8677 |
| 0.071 | 104.26 | 9800 | 0.5677 | 0.8661 | 0.8664 |
| 0.074 | 106.38 | 10000 | 0.5605 | 0.8675 | 0.8677 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:03:10+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4633
- F1 Score: 0.8771
- Accuracy: 0.8771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3604 | 2.13 | 200 | 0.3293 | 0.8603 | 0.8604 |
| 0.2678 | 4.26 | 400 | 0.3065 | 0.8677 | 0.8677 |
| 0.245 | 6.38 | 600 | 0.3205 | 0.8777 | 0.8778 |
| 0.2278 | 8.51 | 800 | 0.2854 | 0.8778 | 0.8778 |
| 0.2116 | 10.64 | 1000 | 0.3084 | 0.8809 | 0.8811 |
| 0.1953 | 12.77 | 1200 | 0.3061 | 0.8824 | 0.8824 |
| 0.1831 | 14.89 | 1400 | 0.3468 | 0.8749 | 0.8751 |
| 0.1698 | 17.02 | 1600 | 0.3223 | 0.8844 | 0.8844 |
| 0.1547 | 19.15 | 1800 | 0.3590 | 0.8717 | 0.8717 |
| 0.1414 | 21.28 | 2000 | 0.3977 | 0.8729 | 0.8731 |
| 0.1282 | 23.4 | 2200 | 0.4078 | 0.8704 | 0.8704 |
| 0.1155 | 25.53 | 2400 | 0.4301 | 0.8844 | 0.8844 |
| 0.1067 | 27.66 | 2600 | 0.4789 | 0.8600 | 0.8604 |
| 0.0901 | 29.79 | 2800 | 0.5379 | 0.8539 | 0.8544 |
| 0.087 | 31.91 | 3000 | 0.4809 | 0.8662 | 0.8664 |
| 0.0801 | 34.04 | 3200 | 0.4592 | 0.8664 | 0.8664 |
| 0.0671 | 36.17 | 3400 | 0.5474 | 0.8640 | 0.8644 |
| 0.0605 | 38.3 | 3600 | 0.5633 | 0.8716 | 0.8717 |
| 0.058 | 40.43 | 3800 | 0.6149 | 0.8628 | 0.8631 |
| 0.0543 | 42.55 | 4000 | 0.5779 | 0.8743 | 0.8744 |
| 0.0495 | 44.68 | 4200 | 0.7113 | 0.8599 | 0.8604 |
| 0.0463 | 46.81 | 4400 | 0.7295 | 0.8591 | 0.8597 |
| 0.0458 | 48.94 | 4600 | 0.6465 | 0.8655 | 0.8657 |
| 0.0383 | 51.06 | 4800 | 0.6267 | 0.8704 | 0.8704 |
| 0.0353 | 53.19 | 5000 | 0.6600 | 0.8724 | 0.8724 |
| 0.0345 | 55.32 | 5200 | 0.7133 | 0.8648 | 0.8651 |
| 0.0321 | 57.45 | 5400 | 0.6932 | 0.8670 | 0.8671 |
| 0.0281 | 59.57 | 5600 | 0.7285 | 0.8697 | 0.8697 |
| 0.0305 | 61.7 | 5800 | 0.7504 | 0.8708 | 0.8711 |
| 0.0271 | 63.83 | 6000 | 0.7655 | 0.8703 | 0.8704 |
| 0.0224 | 65.96 | 6200 | 0.7983 | 0.8724 | 0.8724 |
| 0.0234 | 68.09 | 6400 | 0.8454 | 0.8709 | 0.8711 |
| 0.0209 | 70.21 | 6600 | 0.8179 | 0.8737 | 0.8737 |
| 0.0215 | 72.34 | 6800 | 0.8238 | 0.8735 | 0.8737 |
| 0.0203 | 74.47 | 7000 | 0.9210 | 0.8680 | 0.8684 |
| 0.0196 | 76.6 | 7200 | 0.8355 | 0.8730 | 0.8731 |
| 0.0183 | 78.72 | 7400 | 0.8203 | 0.8742 | 0.8744 |
| 0.0169 | 80.85 | 7600 | 0.9234 | 0.8689 | 0.8691 |
| 0.0169 | 82.98 | 7800 | 0.8332 | 0.8757 | 0.8758 |
| 0.016 | 85.11 | 8000 | 0.8825 | 0.8749 | 0.8751 |
| 0.0146 | 87.23 | 8200 | 0.9027 | 0.8708 | 0.8711 |
| 0.0145 | 89.36 | 8400 | 0.9219 | 0.8742 | 0.8744 |
| 0.0117 | 91.49 | 8600 | 0.9785 | 0.8696 | 0.8697 |
| 0.0134 | 93.62 | 8800 | 0.9348 | 0.8661 | 0.8664 |
| 0.0115 | 95.74 | 9000 | 0.9575 | 0.8709 | 0.8711 |
| 0.0126 | 97.87 | 9200 | 0.9394 | 0.8676 | 0.8677 |
| 0.0107 | 100.0 | 9400 | 0.9777 | 0.8662 | 0.8664 |
| 0.0113 | 102.13 | 9600 | 0.9691 | 0.8683 | 0.8684 |
| 0.0106 | 104.26 | 9800 | 0.9459 | 0.8703 | 0.8704 |
| 0.0105 | 106.38 | 10000 | 0.9487 | 0.8716 | 0.8717 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:03:22+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold3
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1347
- Accuracy: 0.6722
## Model description
More information needed
## Intended uses & limitations
More information needed
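As a rough illustration of the intended use (not documented by the author), a fine-tuned Swin classifier like this would typically be called through the `image-classification` pipeline:
```python
# Sketch of typical usage; the label set for this checkpoint is not documented in the card.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold3",
)
print(classifier("example.jpg"))  # path or URL to an image
```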
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9875 | 1.0 | 923 | 1.1360 | 0.6089 |
| 1.0112 | 2.0 | 1846 | 0.9872 | 0.6554 |
| 0.9658 | 3.0 | 2769 | 0.9779 | 0.6616 |
| 0.7536 | 4.0 | 3692 | 0.9452 | 0.6762 |
| 0.5265 | 5.0 | 4615 | 1.0010 | 0.6708 |
| 0.4568 | 6.0 | 5538 | 1.0042 | 0.6711 |
| 0.3861 | 7.0 | 6461 | 1.0447 | 0.6795 |
| 0.4241 | 8.0 | 7384 | 1.1029 | 0.6714 |
| 0.3476 | 9.0 | 8307 | 1.1233 | 0.6716 |
| 0.3472 | 10.0 | 9230 | 1.1347 | 0.6722 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-large-patch4-window7-224", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold3", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.672166621585069, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold3 | null | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-large-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:03:26+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-ar
This model is a fine-tuned version of [tner/xlm-roberta-base-panx-dataset-ar](https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1977
- F1: 0.8803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2179 | 1.0 | 188 | 0.1977 | 0.8803 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "tner/xlm-roberta-base-panx-dataset-ar", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-ar", "results": []}]} | saraataryy/xlm-roberta-base-finetuned-panx-ar | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:tner/xlm-roberta-base-panx-dataset-ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:04:49+00:00 |
text-generation | transformers |
A Fishy Model
This model was trained on the ChatML format with 8k context.
# Uploaded model
- **Developed by:** TheTsar1209
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
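Because the model was trained on the ChatML format, prompts are typically built through the tokenizer's chat template. The snippet below is a hedged sketch: it assumes the uploaded tokenizer ships a ChatML chat template, which is not confirmed in this card.
```python
# Sketch only: assumes the repository's tokenizer includes a ChatML chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheTsar1209/llama3-carp-v0.3")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the ChatML format in one sentence."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```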
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | TheTsar1209/llama3-carp-v0.3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:05:29+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-rw-1b-code-generation-llm-task2-modelC
This model is a fine-tuned version of [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.626 | 0.0356 | 20 | 1.7087 |
| 1.9368 | 0.0712 | 40 | 1.6675 |
| 1.4542 | 0.1068 | 60 | 1.6467 |
| 1.2704 | 0.1423 | 80 | 1.6474 |
| 1.1888 | 0.1779 | 100 | 1.6618 |
| 0.9006 | 0.2135 | 120 | 1.6415 |
| 1.1376 | 0.2491 | 140 | 1.6583 |
| 0.9937 | 0.2847 | 160 | 1.6454 |
| 0.8624 | 0.3203 | 180 | 1.6594 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "petals-team/falcon-rw-1b", "model-index": [{"name": "falcon-rw-1b-code-generation-llm-task2-modelC", "results": []}]} | Katochh/falcon-rw-1b-code-generation-llm-task2-modelC | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:petals-team/falcon-rw-1b",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-26T22:08:38+00:00 |
null | null | {} | ChristianAzinn/snowflake-arctic-instruct-gguf-BROKEN | null | [
"region:us"
]
| null | 2024-04-26T22:09:08+00:00 |
|
null | null |
# CAI Llama3 Consciousness Model
## Overview:
The CAI Llama3 Consciousness Model is a specialized large language model designed to understand and respond to complex questions about consciousness. Trained on the "Consciousness Benchmark Dataset" containing 10,000 unique questions and responses, this model represents a significant step toward AI systems that can engage in meaningful discussions about consciousness, philosophy, neuroscience, and related fields.
## Training Data:
The model is fine-tuned on a dataset focused on consciousness studies, covering a broad spectrum of topics including philosophy, quantum consciousness, neuroscience, and the explanatory gap. The dataset contains detailed responses to questions exploring various aspects of consciousness, providing a rich foundation for training.
## Architecture:
This model is based on the Llama3 8b architecture, a state-of-the-art large language model known for its capability to understand and generate human-like text. The fine-tuning process has been tailored to enhance the model's understanding of consciousness-related topics, allowing it to provide insightful and coherent responses to complex questions.
## Applications:
The CAI Llama3 Consciousness Model can be applied to various use cases, such as:
## Research and Education:
Supporting researchers and educators in the field of consciousness studies by offering insights and detailed responses.
## Conversational AI:
Building conversational agents capable of discussing consciousness, philosophy, and related topics.
## Knowledge-based Systems:
Enhancing AI systems with knowledge in consciousness studies to provide more comprehensive answers and explanations.
## Performance and Features:
The model is designed to offer:
- High-Quality Responses: Capable of generating coherent and relevant responses to a wide range of consciousness-related questions.
- Contextual Understanding: Able to understand the context and nuances in complex discussions about consciousness.
- Flexibility: Suitable for integration into various AI applications, from conversational agents to research tools.
## Licensing and Attribution:
Before using this dataset, ensure compliance with any licensing agreements or usage restrictions. If you share or redistribute the dataset, provide appropriate attribution to the source.
## Contact Information:
For additional information about the dataset or if you have questions, please contact [@innerinetco](https://x.com/innerinetco) | {"license": "llama3", "datasets": ["InnerI/CAI"]} | InnerI/CAI | null | [
"safetensors",
"dataset:InnerI/CAI",
"license:llama3",
"region:us"
]
| null | 2024-04-26T22:11:23+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-bert-finetuned-squadv2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
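As a rough illustration of the intended extractive-QA use (not documented by the author), such a checkpoint would typically be called through the `question-answering` pipeline:
```python
# Sketch of typical extractive-QA usage; quality for this checkpoint is not documented.
from transformers import pipeline

qa = pipeline("question-answering", model="momo345/distilled-bert-finetuned-squadv2")
result = qa(
    question="Which base model was fine-tuned?",
    context="distilbert-base-uncased was fine-tuned on an unknown dataset for SQuAD v2-style QA.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```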
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilled-bert-finetuned-squadv2", "results": []}]} | momo345/distilled-bert-finetuned-squadv2 | null | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:12:39+00:00 |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | rubbrband/dynavisionXLAllInOneStylized_releaseV0610Bakedvae | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| null | 2024-04-26T22:12:51+00:00 |
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
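As a reminder of what such an agent optimizes, the core REINFORCE (policy-gradient) update can be sketched in a few lines; this is a generic illustration, not the exact script used to train this checkpoint.
```python
# Generic REINFORCE loss sketch (not the training script behind this checkpoint).
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: per-step log pi(a_t|s_t) tensors; rewards: per-step rewards from one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):                 # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize as a simple baseline
    return -(torch.stack(log_probs) * returns).sum()               # minimize negative expected return
```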
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-cartpole-01", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | stuvx/Reinforce-cartpole-01 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| null | 2024-04-26T22:13:10+00:00 |
null | null | {"license": "mit"} | nagug/varnam_llemma_7b.gguf | null | [
"gguf",
"license:mit",
"region:us"
]
| null | 2024-04-26T22:13:52+00:00 |
|
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | zubochenko/Plastic | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
]
| null | 2024-04-26T22:15:42+00:00 |
null | null |
# sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF
This model was converted to GGUF format from [`sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test`](https://huggingface.co/sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF --model hansoldeco-beomi-llama3-open-ko-8b-64k-test.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF --model hansoldeco-beomi-llama3-open-ko-8b-64k-test.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m hansoldeco-beomi-llama3-open-ko-8b-64k-test.Q6_K.gguf -n 128
```
| {"license": "other", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "beomi/Llama-3-Open-Ko-8B", "model-index": [{"name": "beomi-llama3-8b-64k", "results": []}]} | sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:beomi/Llama-3-Open-Ko-8B",
"license:other",
"region:us"
]
| null | 2024-04-26T22:16:28+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bigscience-small-testing - bnb 4bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bigscience-small-testing/
Original model description:
---
language:
- eng
tags:
- integration
pipeline_tag: text-generation
---
# BigScience - testing model
This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the script. Use it only for integration tests.
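Since this export is quantized with bitsandbytes in 4-bit, it would normally be loaded through `transformers`; the snippet below is a generic sketch that assumes a CUDA GPU with `bitsandbytes` installed and that the quantization config is stored in the repository.
```python
# Generic sketch for a bitsandbytes 4-bit checkpoint; requires a CUDA GPU and bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/bigscience_-_bigscience-small-testing-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # picks up the stored 4-bit config
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```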
| {} | RichardErkhov/bigscience_-_bigscience-small-testing-4bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-26T22:17:46+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bigscience-small-testing - bnb 8bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bigscience-small-testing/
Original model description:
---
language:
- eng
tags:
- integration
pipeline_tag: text-generation
---
# BigScience - testing model
This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the script. Use it only for integration tests.
| {} | RichardErkhov/bigscience_-_bigscience-small-testing-8bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
]
| null | 2024-04-26T22:18:02+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
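
Until the author fills this section in, here is a hedged sketch based only on the card metadata (base model `mistralai/Mistral-7B-Instruct-v0.2`, PEFT 0.10.0). The assumption that this repo holds a LoRA-style adapter, the dtype/device settings, and the prompt text are mine, not the author's.

```python
# Hedged sketch: attach this repo as a PEFT adapter on top of its base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"   # from the card metadata
adapter_id = "ivillar/Enlighten_Instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "[INST] Summarize what this adapter was trained for. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```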
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"} | ivillar/Enlighten_Instruct | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
]
| null | 2024-04-26T22:18:12+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | Mohamedshaaban2001/llama3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-26T22:18:49+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
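
As a stopgap, a minimal hedged sketch is given below; only the repo id is taken from this page, and the hardware/device settings and prompt are assumptions.

```python
# Hedged sketch: plain text generation with this checkpoint via the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="manoj-dhakal/llama-3-8b-PhiloSloppy",  # repo id from this page
    device_map="auto",
)
print(generator("What makes a life well lived?", max_new_tokens=60)[0]["generated_text"])
```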
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | manoj-dhakal/llama-3-8b-PhiloSloppy | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:21:21+00:00 |
null | null | {"license": "openrail"} | Frixi/Boston_Italian_Male | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-26T22:21:41+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5185
- F1 Score: 0.7454
- Accuracy: 0.7452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
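
The listed hyperparameters map onto `transformers.TrainingArguments` roughly as sketched below; the output directory is a placeholder and the PEFT/LoRA wiring used for this run is not shown here.

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gue-emp-h4ac-example",   # placeholder, not taken from the card
    learning_rate=5e-4,                  # 0.0005
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,                    # training_steps: 10000
)
```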
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.599 | 0.93 | 200 | 0.5564 | 0.7170 | 0.7182 |
| 0.5605 | 1.87 | 400 | 0.5432 | 0.7308 | 0.7305 |
| 0.5397 | 2.8 | 600 | 0.5353 | 0.7369 | 0.7372 |
| 0.5353 | 3.74 | 800 | 0.5292 | 0.7344 | 0.7349 |
| 0.531 | 4.67 | 1000 | 0.5277 | 0.7381 | 0.7378 |
| 0.5205 | 5.61 | 1200 | 0.5246 | 0.7411 | 0.7408 |
| 0.5215 | 6.54 | 1400 | 0.5258 | 0.7430 | 0.7428 |
| 0.5078 | 7.48 | 1600 | 0.5239 | 0.7463 | 0.7460 |
| 0.5193 | 8.41 | 1800 | 0.5190 | 0.7460 | 0.7457 |
| 0.5081 | 9.35 | 2000 | 0.5184 | 0.7460 | 0.7457 |
| 0.5053 | 10.28 | 2200 | 0.5193 | 0.7501 | 0.7499 |
| 0.5067 | 11.21 | 2400 | 0.5202 | 0.7465 | 0.7463 |
| 0.5003 | 12.15 | 2600 | 0.5292 | 0.7439 | 0.7443 |
| 0.5007 | 13.08 | 2800 | 0.5184 | 0.7497 | 0.7496 |
| 0.5008 | 14.02 | 3000 | 0.5148 | 0.7504 | 0.7501 |
| 0.4962 | 14.95 | 3200 | 0.5123 | 0.7545 | 0.7543 |
| 0.4916 | 15.89 | 3400 | 0.5205 | 0.7496 | 0.7496 |
| 0.4917 | 16.82 | 3600 | 0.5191 | 0.7486 | 0.7487 |
| 0.4927 | 17.76 | 3800 | 0.5300 | 0.7449 | 0.7455 |
| 0.4924 | 18.69 | 4000 | 0.5143 | 0.7516 | 0.7513 |
| 0.4872 | 19.63 | 4200 | 0.5176 | 0.7501 | 0.7501 |
| 0.4915 | 20.56 | 4400 | 0.5116 | 0.7522 | 0.7519 |
| 0.4856 | 21.5 | 4600 | 0.5190 | 0.7498 | 0.7501 |
| 0.4856 | 22.43 | 4800 | 0.5082 | 0.7509 | 0.7507 |
| 0.483 | 23.36 | 5000 | 0.5199 | 0.7540 | 0.7543 |
| 0.4833 | 24.3 | 5200 | 0.5087 | 0.7522 | 0.7519 |
| 0.4823 | 25.23 | 5400 | 0.5080 | 0.7525 | 0.7522 |
| 0.4826 | 26.17 | 5600 | 0.5115 | 0.7565 | 0.7563 |
| 0.4793 | 27.1 | 5800 | 0.5084 | 0.7583 | 0.7581 |
| 0.4753 | 28.04 | 6000 | 0.5103 | 0.7580 | 0.7578 |
| 0.4795 | 28.97 | 6200 | 0.5182 | 0.7546 | 0.7548 |
| 0.481 | 29.91 | 6400 | 0.5099 | 0.7563 | 0.7560 |
| 0.4786 | 30.84 | 6600 | 0.5196 | 0.7544 | 0.7548 |
| 0.4752 | 31.78 | 6800 | 0.5099 | 0.7574 | 0.7572 |
| 0.4743 | 32.71 | 7000 | 0.5098 | 0.7574 | 0.7572 |
| 0.4708 | 33.64 | 7200 | 0.5150 | 0.7524 | 0.7525 |
| 0.4747 | 34.58 | 7400 | 0.5112 | 0.7589 | 0.7587 |
| 0.4744 | 35.51 | 7600 | 0.5124 | 0.7534 | 0.7534 |
| 0.4692 | 36.45 | 7800 | 0.5114 | 0.7577 | 0.7575 |
| 0.4738 | 37.38 | 8000 | 0.5204 | 0.7509 | 0.7513 |
| 0.4689 | 38.32 | 8200 | 0.5135 | 0.7543 | 0.7543 |
| 0.4699 | 39.25 | 8400 | 0.5112 | 0.7550 | 0.7548 |
| 0.4727 | 40.19 | 8600 | 0.5111 | 0.7592 | 0.7589 |
| 0.4694 | 41.12 | 8800 | 0.5102 | 0.7556 | 0.7554 |
| 0.4697 | 42.06 | 9000 | 0.5115 | 0.7544 | 0.7543 |
| 0.47 | 42.99 | 9200 | 0.5163 | 0.7533 | 0.7534 |
| 0.4667 | 43.93 | 9400 | 0.5136 | 0.7552 | 0.7551 |
| 0.4685 | 44.86 | 9600 | 0.5118 | 0.7550 | 0.7548 |
| 0.4667 | 45.79 | 9800 | 0.5119 | 0.7544 | 0.7543 |
| 0.4696 | 46.73 | 10000 | 0.5127 | 0.7547 | 0.7545 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:23:27+00:00 |
null | null | {} | fonzieso/Rosa_Ushiromiya | null | [
"region:us"
]
| null | 2024-04-26T22:23:29+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5256
- F1 Score: 0.7445
- Accuracy: 0.7443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5831 | 0.93 | 200 | 0.5387 | 0.7370 | 0.7367 |
| 0.5368 | 1.87 | 400 | 0.5344 | 0.7379 | 0.7381 |
| 0.523 | 2.8 | 600 | 0.5245 | 0.7480 | 0.7478 |
| 0.5178 | 3.74 | 800 | 0.5188 | 0.7453 | 0.7455 |
| 0.5129 | 4.67 | 1000 | 0.5176 | 0.7487 | 0.7484 |
| 0.5003 | 5.61 | 1200 | 0.5172 | 0.7485 | 0.7484 |
| 0.5013 | 6.54 | 1400 | 0.5165 | 0.7468 | 0.7469 |
| 0.4858 | 7.48 | 1600 | 0.5154 | 0.7527 | 0.7528 |
| 0.4953 | 8.41 | 1800 | 0.5081 | 0.7602 | 0.7601 |
| 0.4826 | 9.35 | 2000 | 0.5038 | 0.7618 | 0.7616 |
| 0.4774 | 10.28 | 2200 | 0.5071 | 0.7607 | 0.7607 |
| 0.4788 | 11.21 | 2400 | 0.5108 | 0.7622 | 0.7625 |
| 0.4695 | 12.15 | 2600 | 0.5230 | 0.7623 | 0.7628 |
| 0.4673 | 13.08 | 2800 | 0.5081 | 0.7553 | 0.7551 |
| 0.4657 | 14.02 | 3000 | 0.5106 | 0.7680 | 0.7680 |
| 0.4619 | 14.95 | 3200 | 0.5067 | 0.7677 | 0.7674 |
| 0.4549 | 15.89 | 3400 | 0.5063 | 0.7677 | 0.7674 |
| 0.4549 | 16.82 | 3600 | 0.5193 | 0.7573 | 0.7575 |
| 0.4533 | 17.76 | 3800 | 0.5346 | 0.7471 | 0.7484 |
| 0.4506 | 18.69 | 4000 | 0.5126 | 0.7616 | 0.7613 |
| 0.4457 | 19.63 | 4200 | 0.5069 | 0.7625 | 0.7622 |
| 0.4477 | 20.56 | 4400 | 0.5066 | 0.7627 | 0.7625 |
| 0.4407 | 21.5 | 4600 | 0.5125 | 0.7624 | 0.7625 |
| 0.4395 | 22.43 | 4800 | 0.5104 | 0.7512 | 0.7510 |
| 0.4348 | 23.36 | 5000 | 0.5139 | 0.7618 | 0.7616 |
| 0.4358 | 24.3 | 5200 | 0.5075 | 0.7597 | 0.7595 |
| 0.4325 | 25.23 | 5400 | 0.5063 | 0.7574 | 0.7572 |
| 0.431 | 26.17 | 5600 | 0.5125 | 0.7583 | 0.7581 |
| 0.4259 | 27.1 | 5800 | 0.5188 | 0.7468 | 0.7466 |
| 0.4211 | 28.04 | 6000 | 0.5109 | 0.7551 | 0.7551 |
| 0.4238 | 28.97 | 6200 | 0.5197 | 0.7591 | 0.7589 |
| 0.423 | 29.91 | 6400 | 0.5212 | 0.7510 | 0.7507 |
| 0.4227 | 30.84 | 6600 | 0.5272 | 0.7543 | 0.7545 |
| 0.422 | 31.78 | 6800 | 0.5129 | 0.7543 | 0.7540 |
| 0.4145 | 32.71 | 7000 | 0.5217 | 0.7533 | 0.7531 |
| 0.4138 | 33.64 | 7200 | 0.5306 | 0.7500 | 0.7501 |
| 0.4151 | 34.58 | 7400 | 0.5224 | 0.7519 | 0.7516 |
| 0.4124 | 35.51 | 7600 | 0.5221 | 0.7454 | 0.7452 |
| 0.4071 | 36.45 | 7800 | 0.5242 | 0.7528 | 0.7525 |
| 0.4117 | 37.38 | 8000 | 0.5251 | 0.7523 | 0.7522 |
| 0.405 | 38.32 | 8200 | 0.5278 | 0.7518 | 0.7516 |
| 0.4068 | 39.25 | 8400 | 0.5228 | 0.7516 | 0.7513 |
| 0.4062 | 40.19 | 8600 | 0.5246 | 0.7554 | 0.7551 |
| 0.404 | 41.12 | 8800 | 0.5261 | 0.7525 | 0.7522 |
| 0.4029 | 42.06 | 9000 | 0.5234 | 0.7548 | 0.7545 |
| 0.3998 | 42.99 | 9200 | 0.5308 | 0.7483 | 0.7481 |
| 0.3993 | 43.93 | 9400 | 0.5279 | 0.7495 | 0.7493 |
| 0.4 | 44.86 | 9600 | 0.5292 | 0.7501 | 0.7499 |
| 0.3987 | 45.79 | 9800 | 0.5275 | 0.7516 | 0.7513 |
| 0.4019 | 46.73 | 10000 | 0.5278 | 0.7492 | 0.7490 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:23:32+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5206
- F1 Score: 0.7495
- Accuracy: 0.7493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5704 | 0.93 | 200 | 0.5291 | 0.7380 | 0.7378 |
| 0.5282 | 1.87 | 400 | 0.5289 | 0.7413 | 0.7419 |
| 0.5142 | 2.8 | 600 | 0.5150 | 0.7536 | 0.7534 |
| 0.505 | 3.74 | 800 | 0.5092 | 0.7588 | 0.7587 |
| 0.4973 | 4.67 | 1000 | 0.5087 | 0.7583 | 0.7581 |
| 0.4803 | 5.61 | 1200 | 0.5097 | 0.7623 | 0.7622 |
| 0.4798 | 6.54 | 1400 | 0.5004 | 0.7633 | 0.7630 |
| 0.4615 | 7.48 | 1600 | 0.5067 | 0.7680 | 0.7677 |
| 0.4646 | 8.41 | 1800 | 0.4996 | 0.7701 | 0.7698 |
| 0.4512 | 9.35 | 2000 | 0.5005 | 0.7642 | 0.7639 |
| 0.4406 | 10.28 | 2200 | 0.5116 | 0.7660 | 0.7657 |
| 0.4378 | 11.21 | 2400 | 0.5163 | 0.7638 | 0.7636 |
| 0.4252 | 12.15 | 2600 | 0.5131 | 0.7676 | 0.7674 |
| 0.4194 | 13.08 | 2800 | 0.5110 | 0.7607 | 0.7604 |
| 0.4133 | 14.02 | 3000 | 0.5230 | 0.7658 | 0.7657 |
| 0.4042 | 14.95 | 3200 | 0.5172 | 0.7657 | 0.7654 |
| 0.391 | 15.89 | 3400 | 0.5241 | 0.7680 | 0.7677 |
| 0.388 | 16.82 | 3600 | 0.5437 | 0.7529 | 0.7528 |
| 0.378 | 17.76 | 3800 | 0.5732 | 0.7431 | 0.7446 |
| 0.3728 | 18.69 | 4000 | 0.5386 | 0.7637 | 0.7636 |
| 0.367 | 19.63 | 4200 | 0.5414 | 0.7510 | 0.7507 |
| 0.3592 | 20.56 | 4400 | 0.5526 | 0.7551 | 0.7548 |
| 0.3505 | 21.5 | 4600 | 0.5667 | 0.7574 | 0.7578 |
| 0.344 | 22.43 | 4800 | 0.5569 | 0.7492 | 0.7490 |
| 0.3371 | 23.36 | 5000 | 0.5637 | 0.7560 | 0.7557 |
| 0.3293 | 24.3 | 5200 | 0.5626 | 0.7513 | 0.7510 |
| 0.3274 | 25.23 | 5400 | 0.5735 | 0.7421 | 0.7422 |
| 0.316 | 26.17 | 5600 | 0.5841 | 0.7554 | 0.7551 |
| 0.3147 | 27.1 | 5800 | 0.5948 | 0.7468 | 0.7472 |
| 0.3048 | 28.04 | 6000 | 0.5895 | 0.7531 | 0.7528 |
| 0.2992 | 28.97 | 6200 | 0.6038 | 0.7522 | 0.7519 |
| 0.2961 | 29.91 | 6400 | 0.5871 | 0.7507 | 0.7504 |
| 0.2974 | 30.84 | 6600 | 0.5992 | 0.7528 | 0.7525 |
| 0.2919 | 31.78 | 6800 | 0.5873 | 0.7553 | 0.7551 |
| 0.2827 | 32.71 | 7000 | 0.6055 | 0.7487 | 0.7487 |
| 0.2813 | 33.64 | 7200 | 0.6482 | 0.7421 | 0.7431 |
| 0.2786 | 34.58 | 7400 | 0.6069 | 0.7551 | 0.7548 |
| 0.2711 | 35.51 | 7600 | 0.6172 | 0.7472 | 0.7469 |
| 0.2633 | 36.45 | 7800 | 0.6349 | 0.7572 | 0.7569 |
| 0.2656 | 37.38 | 8000 | 0.6379 | 0.7544 | 0.7543 |
| 0.2616 | 38.32 | 8200 | 0.6498 | 0.7516 | 0.7513 |
| 0.2585 | 39.25 | 8400 | 0.6421 | 0.7516 | 0.7513 |
| 0.2562 | 40.19 | 8600 | 0.6397 | 0.7493 | 0.7490 |
| 0.254 | 41.12 | 8800 | 0.6547 | 0.7545 | 0.7543 |
| 0.2519 | 42.06 | 9000 | 0.6470 | 0.7545 | 0.7543 |
| 0.2479 | 42.99 | 9200 | 0.6571 | 0.7498 | 0.7496 |
| 0.2471 | 43.93 | 9400 | 0.6546 | 0.7424 | 0.7422 |
| 0.246 | 44.86 | 9600 | 0.6531 | 0.7468 | 0.7466 |
| 0.2457 | 45.79 | 9800 | 0.6563 | 0.7490 | 0.7487 |
| 0.2374 | 46.73 | 10000 | 0.6618 | 0.7492 | 0.7490 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:23:42+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-560m - bnb 4bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-560m/
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-560m
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb             | Ruby       | 678,413         |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 559,214,592 parameters:
* 256,901,120 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 1024-dimensional
* Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node using NVLink 4 inter-gpu connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs)
- Training throughput: About 150 TFLOPs per GPU
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)
- Server training location: Γle-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
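
As a quick illustration of the tokenizer described above, the sketch below loads it through `transformers`; the expectation that the printed vocabulary size matches the 250,680 figure is my reading of this section rather than part of the original card.

```python
# Minimal sketch: inspect the BLOOM tokenizer described in this section.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
print(tokenizer.vocab_size)                               # expected: 250680
print(tokenizer.tokenize("BigScience trained BLOOM on 45 natural languages."))
```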
## Citation
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on the probability the model assigns to new data. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated from the cross-entropy; a short numeric sketch follows this list.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
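
Tying the Loss and Perplexity entries above together: for a language model, perplexity is the exponential of the mean cross-entropy loss, which is consistent with the train-time numbers reported earlier (validation loss 2.2, perplexity 8.9). A tiny sketch:

```python
# Sketch: perplexity = exp(mean cross-entropy loss).
import math

validation_loss = 2.2            # train-time evaluation value reported above
print(math.exp(validation_loss)) # ~9.0, close to the reported perplexity of 8.9
```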
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos MuΓ±oz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana IliΔ, GΓ©rard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** [email protected]
| {} | RichardErkhov/bigscience_-_bloom-560m-4bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-26T22:25:26+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
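
For example, one way to fetch a single quant from the table below is via `huggingface_hub`; the filename is taken from the Q4_K_M row, and the rest of the snippet is a hedged sketch rather than the maintainer's recommended workflow.

```python
# Hedged sketch: download one quant file, then point a GGUF runtime (e.g. llama.cpp) at it.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF",
    filename="OpenBioLLM-Llama3-8B.i1-Q4_K_M.gguf",  # the "fast, recommended" row below
)
print(gguf_path)
```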
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "aaditya/OpenBioLLM-Llama3-8B", "quantized_by": "mradermacher"} | mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF | null | [
"transformers",
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"base_model:aaditya/OpenBioLLM-Llama3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:25:50+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-560m - bnb 8bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-560m/
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-560m
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 | | | |
| Isi Zulu | 0.001 | | | |
| Igbo | 0.001 | | | |
| Xhosa | 0.001 | | | |
| Kinyarwanda | 0.003 | | | |
| Yoruba | 0.006 | | | |
| Swahili | 0.02 | | | |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb             | Ruby       | 678,413         |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
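For a language model these quantities are directly related: perplexity is the exponential of the cross-entropy validation loss, so the figures above agree up to rounding of the loss:

$$\mathrm{PPL} = \exp(\mathcal{L}_{\mathrm{CE}}), \qquad \exp(2.2) \approx 9.0 \;\; (\text{reported: } 8.9)$$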
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 559,214,592 parameters:
* 256,901,120 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 1024-dimensional
* Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
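As a toy sketch of that objective on random tensors (the next-token shift below is the standard causal-LM convention; the card itself only specifies the loss function and the mean reduction):

```python
import torch
import torch.nn.functional as F

vocab_size = 250680                                  # matches the BLOOM vocabulary
logits = torch.randn(2, 16, vocab_size)              # (batch, sequence, vocab) model outputs
input_ids = torch.randint(0, vocab_size, (2, 16))    # token ids of the same sequences

# Logits at position t are scored against the token at position t + 1; losses are averaged.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),
    input_ids[:, 1:].reshape(-1),
    reduction="mean",
)
print(loss.item())
```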
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node using NVLink 4 inter-gpu connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs)
- Training throughput: About 150 TFLOPs per GPU
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)
- Server training location: Île-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
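A quick way to see the tokenizer's behaviour (sketch; the tokenizer ships with the base `bigscience/bloom-560m` repo):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
print(tok.vocab_size)                          # 250680, matching the figure above
ids = tok("Bonjour, le monde !").input_ids     # byte-level BPE, no normalization
print(tok.convert_ids_to_tokens(ids))
```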
## Citation
**Cite as:** BigScience, _BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** [email protected]
| {} | RichardErkhov/bigscience_-_bloom-560m-8bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
]
| null | 2024-04-26T22:26:26+00:00 |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [vaatsav06/Llama3_medqa_final](https://huggingface.co/vaatsav06/Llama3_medqa_final)
* [o2satz/L3_med16](https://huggingface.co/o2satz/L3_med16)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: o2satz/L3_med16
layer_range:
- 0
- 32
- model: vaatsav06/Llama3_medqa_final
layer_range:
- 0
- 32
merge_method: slerp
base_model: vaatsav06/Llama3_medqa_final
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
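The merged checkpoint can be used like any other Llama-3-style causal LM; a minimal usage sketch (repo ID taken from this card, prompt is illustrative):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="o2satz/L3_med16QA", device_map="auto")
out = pipe("What are common symptoms of iron-deficiency anemia?", max_new_tokens=128, do_sample=False)
print(out[0]["generated_text"])
```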
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["vaatsav06/Llama3_medqa_final", "o2satz/L3_med16"]} | o2satz/L3_med16QA | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:vaatsav06/Llama3_medqa_final",
"base_model:o2satz/L3_med16",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
]
| null | 2024-04-26T22:27:32+00:00 |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-560m - GGUF
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-560m/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bloom-560m.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q2_K.gguf) | Q2_K | 0.39GB |
| [bloom-560m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ3_XS.gguf) | IQ3_XS | 0.43GB |
| [bloom-560m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ3_S.gguf) | IQ3_S | 0.43GB |
| [bloom-560m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q3_K_S.gguf) | Q3_K_S | 0.43GB |
| [bloom-560m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ3_M.gguf) | IQ3_M | 0.45GB |
| [bloom-560m.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q3_K.gguf) | Q3_K | 0.46GB |
| [bloom-560m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q3_K_M.gguf) | Q3_K_M | 0.46GB |
| [bloom-560m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q3_K_L.gguf) | Q3_K_L | 0.47GB |
| [bloom-560m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ4_XS.gguf) | IQ4_XS | 0.49GB |
| [bloom-560m.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_0.gguf) | Q4_0 | 0.5GB |
| [bloom-560m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ4_NL.gguf) | IQ4_NL | 0.5GB |
| [bloom-560m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_K_S.gguf) | Q4_K_S | 0.5GB |
| [bloom-560m.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_K.gguf) | Q4_K | 0.52GB |
| [bloom-560m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_K_M.gguf) | Q4_K_M | 0.52GB |
| [bloom-560m.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_1.gguf) | Q4_1 | 0.53GB |
| [bloom-560m.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_0.gguf) | Q5_0 | 0.57GB |
| [bloom-560m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_K_S.gguf) | Q5_K_S | 0.57GB |
| [bloom-560m.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_K.gguf) | Q5_K | 0.58GB |
| [bloom-560m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_K_M.gguf) | Q5_K_M | 0.58GB |
| [bloom-560m.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_1.gguf) | Q5_1 | 0.6GB |
| [bloom-560m.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q6_K.gguf) | Q6_K | 0.64GB |
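One way to try these files is to pull a single quant with `huggingface_hub` and run it with `llama-cpp-python`; a sketch (the filename is one row of the table above, generation settings are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/bigscience_-_bloom-560m-gguf",   # this repository
    filename="bloom-560m.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("The capital of France is", max_tokens=20)["choices"][0]["text"])
```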
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-560m
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 | | | |
| Isi Zulu | 0.001 | | | |
| Igbo | 0.001 | | | |
| Xhosa | 0.001 | | | |
| Kinyarwanda | 0.003 | | | |
| Yoruba | 0.006 | | | |
| Swahili | 0.02 | | | |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb             | Ruby       | 678,413         |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 559,214,592 parameters:
* 256,901,120 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 1024-dimensional
* Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
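These hyperparameters can be read off the published configuration; a short sketch (attribute names follow the `transformers` BloomConfig):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("bigscience/bloom-560m")
print(cfg.n_layer, cfg.n_head, cfg.hidden_size)   # 24 layers, 16 heads, 1024-dim hidden states
print(cfg.vocab_size)                             # 250680-entry vocabulary
```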
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node using NVLink 4 inter-gpu connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs)
- Training throughput: About 150 TFLOPs per GPU
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)
- Server training location: Île-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
## Citation
**Cite as:** BigScience, _BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** [email protected]
| {} | RichardErkhov/bigscience_-_bloom-560m-gguf | null | [
"gguf",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"region:us"
]
| null | 2024-04-26T22:27:47+00:00 |
text-classification | transformers |
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.0682125091552734
f1_macro: 0.17334740509080254
f1_micro: 0.677115987460815
f1_weighted: 0.6445509958774286
precision_macro: 0.20939709069152226
precision_micro: 0.677115987460815
precision_weighted: 0.6403148729034246
recall_macro: 0.17545960483239412
recall_micro: 0.677115987460815
recall_weighted: 0.677115987460815
accuracy: 0.677115987460815
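A minimal sketch of querying the classifier (repo ID taken from this card; the label set depends on the private training data, so the output label below is a placeholder):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="KassioLima/autotrain-produto-google-bert-base-uncased-9099",  # this repository
)
print(clf("I love AutoTrain"))   # e.g. [{'label': '<class name>', 'score': 0.9}]
```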
| {"tags": ["autotrain", "text-classification"], "datasets": ["autotrain-produto-google-bert-base-uncased-9099/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]} | KassioLima/autotrain-produto-google-bert-base-uncased-9099 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:autotrain-produto-google-bert-base-uncased-9099/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:27:56+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Lakoc/ebranchformer_6_128h_for_pretraining | null | [
"transformers",
"wav2vec2-ebranchformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:29:09+00:00 |
text-generation | transformers | # TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing Llama-3-TenyxChat-70B, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with our proprietary approach
which shows an increase in [MT-Bench](https://arxiv.org/abs/2306.05685)*, without a drop in performance of the model on other benchmarks.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner,
thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
Llama-3-TenyxChat-70B was trained using eight A100s (80GB) for fifteen hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
*The MT-Bench evaluation we perform follows the latest eval upgrade as PR'd [here](https://github.com/lm-sys/FastChat/pull/3158). This PR upgrades the evaluation from `GPT-4-0613` to `GPT-4-preview-0125` (latest version) as well as corrects and improves the quality of the reference answers for a subset of questions. These changes are required to correct erroneous ratings from the previous evaluation.
**Model Developers** [Tenyx Research](https://www.tenyx.com/research)
# Model details
- Model type: Fine-tuned 70B Instruct model for chat.
- License: Meta Llama 3 Community License
- Base model: [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- Demo: [HuggingFace Space](https://huggingface.co/spaces/tenyx/Llama3-TenyxChat-70B)
## Usage
Our model uses the same chat template as [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
### Hugging face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/Llama3-TenyxChat-70B", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
# Performance
At the time of release (April 2024), Llama3-TenyxChat-70B is the highest-ranked open-source model available for download on the MT-Bench evaluation.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using `GPT-4-preview-0125` on a scale of 1 to 10, with higher values corresponding to better responses.
| Model-name | GPT4-preview-0125 MT Bench | Chat Arena Elo |
|--------------------------------|----------------------------|----------------|
| GPT-4-1106 | 8.79 | 1251 |
| Claude 3 Opus (20240229) | 8.57 | 1247 |
| **Llama3-TenyxChat-70B** |**8.15** | NA |
| *Llama3-70B-Instruct* | 7.96 | 1207 |
| Claude 3 Sonnet (20240229) | 7.82 | 1190 |
| GPT-4-0314 | 7.96 | 1185 |
| Mixtral | 7.38 | 1114 |
| gpt-3.5-turbo-0613 | 7.37 | 1113 |
| Yi-34B | 6.46 | 1099 |
| gpt-3.5-turbo-0125 | 7.52 | 1096 |
| Llama 2 70B | 6.01 | 1082 |
| NV-Llama2-70B-SteerLM-Chat | 6.57 | 1076 |

## Arena Hard
Arena-Hard is an evaluation tool for instruction-tuned LLMs containing 500 challenging user queries. It prompts GPT-4-1106-preview as a judge to compare each model's responses against a baseline model (default: GPT-4-0314).
| Model-name | Score | 95% CI |
|--------------------------------|--------|---------------------|
| gpt-4-0125-preview | 78.0 | 95% CI: (-1.8, 2.2) |
| claude-3-opus-20240229 | 60.4 | 95% CI: (-2.6, 2.1) |
| gpt-4-0314 | 50.0 | 95% CI: (0.0, 0.0) |
| **tenyx/Llama3-TenyxChat-70B** | **49.0** | 95% CI: (-3.0, 2.4) |
| *meta-llama/Meta-Llama-3-70B-Instruct* | 47.3 | 95% CI: (-1.7, 2.6) |
| claude-3-sonnet-20240229 | 46.8 | 95% CI: (-2.7, 2.3) |
| claude-3-haiku-20240307 | 41.5 | 95% CI: (-2.4, 2.5) |
| gpt-4-0613 | 37.9 | 95% CI: (-2.1, 2.2) |
| mistral-large-2402 | 37.7 | 95% CI: (-2.9, 2.8) |
| Qwen1.5-72B-Chat | 36.1 | 95% CI: (-2.1, 2.4) |
| command-r-plus | 33.1 | 95% CI: (-2.0, 1.9) |
## Open LLM Leaderboard Evaluation
We now present our results on the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) used for benchmarking Open LLM Leaderboard on Hugging Face.
The task involves evaluation on `6` key benchmarks across reasoning and knowledge with different *few-shot* settings. Read more details about the benchmark at [the leaderboard page](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model-name | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Llama3-TenyxChat-70B** | **79.43** | 72.53 | 86.11 | 79.95 | 62.93 | 83.82 | 91.21 |
| *Llama3-70B-Instruct* | 77.88 | 71.42 | 85.69 | 80.06 | 61.81 | 82.87 | 85.44 |
*The results reported are from local evaluation of our model. `tenyx/Llama3-TenyxChat-70B` is submitted and will be reflected in the leaderboard once evaluation succeeds.
# Limitations
Llama3-TenyxChat-70B, like other language models, has its own set of limitations. We haven't fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
Llama3-TenyxChat-70B is distributed under the Meta Llama 3 Community License.
# Citation
If you use Llama3-TenyxChat-70B for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
``` | {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["tenyx-fine-tuning", "dpo", "tenyxchat", "llama3"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "pipeline_tag": "text-generation"} | tenyx/Llama3-TenyxChat-70B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"llama3",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2306.05685",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
]
| null | 2024-04-26T22:31:07+00:00 |
text-generation | transformers |
# mlx-community/Swallow-70b-instruct-v0.1-4bit
This model was converted to MLX format from [`tokyotech-llm/Swallow-70b-instruct-v0.1`]() using mlx-lm version **0.6.0**.
Refer to the [original model card](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Swallow-70b-instruct-v0.1-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"language": ["en", "ja"], "license": "llama2", "library_name": "transformers", "tags": ["mlx"], "pipeline_tag": "text-generation", "model_type": "llama"} | mlx-community/Swallow-70b-instruct-v0.1-4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"ja",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:31:31+00:00 |
null | null | {"license": "openrail"} | erenvandervoort/Naya | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-26T22:31:35+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_OSAPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
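No usage snippet is included above; a minimal sketch for this ViT5-based text2text model (repo ID from this card; the expected instruction/prompt format is not documented here, so the input below is only a placeholder):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "ThuyNT/CS505_COQE_viT5_train_Instruction0_OSAPL_v1"   # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("<input sentence in Vietnamese>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```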
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_OSAPL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_OSAPL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:32:09+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-qlora
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-qlora](https://huggingface.co/alignment-handbook/zephyr-7b-sft-qlora) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5018
- Rewards/chosen: -2.1482
- Rewards/rejected: -3.1540
- Rewards/accuracies: 0.7590
- Rewards/margins: 1.0058
- Logps/rejected: -556.6644
- Logps/chosen: -480.1277
- Logits/rejected: -1.2931
- Logits/chosen: -1.3827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
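To use the adapter produced by this run, it has to be attached to the base model with PEFT; a sketch (base and adapter IDs are the ones listed in this card, dtype and device placement are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"       # base model listed in this card
adapter_id = "chrlu/zephyr-7b-dpo-qlora"    # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Explain DPO in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```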
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6635 | 0.0523 | 100 | 0.6640 | 0.0129 | -0.0631 | 0.6830 | 0.0761 | -247.5831 | -264.0175 | -2.0469 | -2.1424 |
| 0.6119 | 0.1047 | 200 | 0.6207 | -0.5556 | -0.8212 | 0.6790 | 0.2657 | -323.3911 | -320.8676 | -1.9524 | -2.0449 |
| 0.5874 | 0.1570 | 300 | 0.5849 | -0.4240 | -0.8044 | 0.7000 | 0.3804 | -321.7128 | -307.7115 | -1.9609 | -2.0494 |
| 0.5608 | 0.2094 | 400 | 0.5607 | -1.1817 | -1.7752 | 0.7290 | 0.5935 | -418.7894 | -383.4811 | -1.6969 | -1.7823 |
| 0.5287 | 0.2617 | 500 | 0.5434 | -1.7248 | -2.4550 | 0.7250 | 0.7303 | -486.7726 | -437.7878 | -1.5394 | -1.6284 |
| 0.5504 | 0.3141 | 600 | 0.5278 | -1.3541 | -2.1302 | 0.7370 | 0.7761 | -454.2872 | -400.7156 | -1.4439 | -1.5287 |
| 0.5243 | 0.3664 | 700 | 0.5278 | -0.9934 | -1.7415 | 0.7420 | 0.7481 | -415.4179 | -364.6462 | -1.4888 | -1.5754 |
| 0.5346 | 0.4187 | 800 | 0.5285 | -1.0509 | -1.8191 | 0.7360 | 0.7681 | -423.1764 | -370.4044 | -1.4861 | -1.5718 |
| 0.5072 | 0.4711 | 900 | 0.5197 | -1.6324 | -2.5736 | 0.7300 | 0.9412 | -498.6239 | -428.5474 | -1.3651 | -1.4531 |
| 0.5023 | 0.5234 | 1000 | 0.5158 | -1.6927 | -2.6755 | 0.7460 | 0.9828 | -508.8179 | -434.5808 | -1.2853 | -1.3779 |
| 0.4954 | 0.5758 | 1100 | 0.5126 | -1.4605 | -2.3370 | 0.7480 | 0.8765 | -474.9688 | -411.3603 | -1.3921 | -1.4843 |
| 0.4983 | 0.6281 | 1200 | 0.5105 | -2.0566 | -3.0678 | 0.7450 | 1.0112 | -548.0505 | -470.9687 | -1.1942 | -1.2848 |
| 0.4774 | 0.6805 | 1300 | 0.5093 | -1.9802 | -3.0112 | 0.7510 | 1.0311 | -542.3931 | -463.3254 | -1.2574 | -1.3491 |
| 0.4516 | 0.7328 | 1400 | 0.5058 | -2.1539 | -3.2003 | 0.7530 | 1.0464 | -561.2969 | -480.7002 | -1.2592 | -1.3500 |
| 0.4758 | 0.7851 | 1500 | 0.5018 | -2.2342 | -3.2427 | 0.7550 | 1.0085 | -565.5339 | -488.7257 | -1.2803 | -1.3710 |
| 0.4967 | 0.8375 | 1600 | 0.5019 | -2.1690 | -3.1744 | 0.7590 | 1.0054 | -558.7111 | -482.2090 | -1.2939 | -1.3837 |
| 0.4769 | 0.8898 | 1700 | 0.5018 | -2.1431 | -3.1460 | 0.7600 | 1.0029 | -555.8691 | -479.6245 | -1.2936 | -1.3834 |
| 0.4843 | 0.9422 | 1800 | 0.5019 | -2.1475 | -3.1534 | 0.7580 | 1.0059 | -556.6094 | -480.0620 | -1.2932 | -1.3829 |
| 0.5048 | 0.9945 | 1900 | 0.5019 | -2.1484 | -3.1540 | 0.7590 | 1.0056 | -556.6639 | -480.1491 | -1.2933 | -1.3829 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "zephyr-7b-dpo-qlora", "results": []}]} | chrlu/zephyr-7b-dpo-qlora | null | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"region:us"
]
| null | 2024-04-26T22:32:54+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
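Until the authors complete this section, here is a speculative sketch. The repository id comes from this page; the gemma/text-generation tags suggest a causal LM stored in 4-bit form, so `bitsandbytes` may be required. None of this is confirmed by the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "devesh220897/financial-chatbot-for-young-adults"  # assumed from this page

tokenizer = AutoTokenizer.from_pretrained(model_id)
# If the weights are stored pre-quantized, the saved quantization config is
# picked up automatically; otherwise adjust the loading arguments as needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "How should I start budgeting on my first salary?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```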
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | devesh220897/financial-chatbot-for-young-adults | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-26T22:33:18+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_PSOAL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_PSOAL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_PSOAL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:35:05+00:00 |
null | null | {} | SonicInGug/Woody-Tom-Hanks | null | [
"region:us"
]
| null | 2024-04-26T22:35:37+00:00 |
|
text-generation | transformers |
# Keiana-L3-Test6-8B-16
Keiana-L3-Test6-8B-16 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
# Keep in mind that this merged model has not been thoroughly tested yet, which may result in vocabulary errors.
* [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
* [Kaoeiri/Keiana-L3-Test4.7-8B-3](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3)
## π§© Configuration
```yaml
merge_method: model_stock
dtype: float16
base_model: Kaoeiri/Keiana-L3-Test5.75-8B-13.5
models:
  - model: Sao10K/L3-Solana-8B-v1
    parameters:
      weight: .2725
      density: .385
  - model: Kaoeiri/Keiana-L3-Test4.7-8B-3
    parameters:
      weight: .2
      density: .56
parameters:
  int8_mask: true
```
## π» Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kaoeiri/Keiana-L3-Test6-8B-16"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "Sao10K/L3-Solana-8B-v1", "Kaoeiri/Keiana-L3-Test4.7-8B-3"], "base_model": ["Sao10K/L3-Solana-8B-v1", "Kaoeiri/Keiana-L3-Test4.7-8B-3"]} | Kaoeiri/Keiana-L3-Test6-8B-16 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Sao10K/L3-Solana-8B-v1",
"Kaoeiri/Keiana-L3-Test4.7-8B-3",
"conversational",
"base_model:Sao10K/L3-Solana-8B-v1",
"base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:35:55+00:00 |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [o2satz/L3_med16QA](https://huggingface.co/o2satz/L3_med16QA)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: cognitivecomputations/dolphin-2.9-llama3-8b
        layer_range:
          - 0
          - 32
      - model: o2satz/L3_med16QA
        layer_range:
          - 0
          - 32
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
  t:
    - filter: self_attn
      value:
        - 0
        - 0.5
        - 0.3
        - 0.7
        - 1
    - filter: mlp
      value:
        - 1
        - 0.5
        - 0.7
        - 0.3
        - 0
    - value: 0.5
dtype: bfloat16
```
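A minimal usage sketch for the merged checkpoint, mirroring the usual ChatML-style prompting of the Dolphin base. The repository id is assumed from this page, and the presence of a chat template in the bundled tokenizer is an assumption.

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "o2satz/WS_med_QA_Dolphin"  # assumed from this repository
messages = [{"role": "user", "content": "List three red-flag symptoms of appendicitis."}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipe = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(pipe(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"])
```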
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cognitivecomputations/dolphin-2.9-llama3-8b", "o2satz/L3_med16QA"]} | o2satz/WS_med_QA_Dolphin | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"base_model:o2satz/L3_med16QA",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:36:35+00:00 |
null | null | {} | Daguigonz/repo_name | null | [
"region:us"
]
| null | 2024-04-26T22:36:50+00:00 |
|
reinforcement-learning | ml-agents |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nvasko/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos"]} | nvasko/poca-SoccerTwos | null | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| null | 2024-04-26T22:37:31+00:00 |
null | null | {} | Daguigonz/Giru | null | [
"region:us"
]
| null | 2024-04-26T22:38:43+00:00 |
|
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-brain-xray
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sartajbhuvaji/Brain-Tumor-Classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9079
- Accuracy: 0.6904
## Model description
More information needed
## Intended uses & limitations
More information needed
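A minimal inference sketch; the checkpoint id is taken from this repository and the image path is a placeholder for a scan from the evaluation set.

```python
from transformers import pipeline

# Image classification with the fine-tuned ViT checkpoint.
classifier = pipeline(
    "image-classification",
    model="abdulelahagr/vit-base-brain-xray",
)

# Replace with a path or URL to a brain scan image.
predictions = classifier("example_scan.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```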
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2478 | 0.5556 | 100 | 0.9079 | 0.6904 |
| 0.1499 | 1.1111 | 200 | 1.1543 | 0.7183 |
| 0.0872 | 1.6667 | 300 | 1.1469 | 0.7614 |
| 0.0118 | 2.2222 | 400 | 1.2361 | 0.7259 |
| 0.0077 | 2.7778 | 500 | 1.2023 | 0.7665 |
| 0.0057 | 3.3333 | 600 | 1.2470 | 0.7640 |
| 0.0053 | 3.8889 | 700 | 1.2096 | 0.7766 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-base-brain-xray", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "sartajbhuvaji/Brain-Tumor-Classification", "type": "imagefolder", "config": "default", "split": "Testing", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6903553299492385, "name": "Accuracy"}]}]}]} | abdulelahagr/vit-base-brain-xray | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:39:07+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** mahiatlinux
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | mahiatlinux/MasherAI-7B-v6.2-test2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:39:31+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4153
- F1 Score: 0.8196
- Accuracy: 0.8197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4799 | 1.1 | 200 | 0.4446 | 0.8177 | 0.8176 |
| 0.4462 | 2.21 | 400 | 0.4431 | 0.8016 | 0.8037 |
| 0.4375 | 3.31 | 600 | 0.4297 | 0.8090 | 0.8103 |
| 0.4268 | 4.42 | 800 | 0.4235 | 0.8155 | 0.8166 |
| 0.4258 | 5.52 | 1000 | 0.4296 | 0.8065 | 0.8086 |
| 0.4183 | 6.63 | 1200 | 0.4304 | 0.8095 | 0.8114 |
| 0.4204 | 7.73 | 1400 | 0.4159 | 0.8174 | 0.8183 |
| 0.4136 | 8.84 | 1600 | 0.4270 | 0.8088 | 0.8107 |
| 0.4116 | 9.94 | 1800 | 0.4122 | 0.8254 | 0.8252 |
| 0.4059 | 11.05 | 2000 | 0.4211 | 0.8145 | 0.8155 |
| 0.4079 | 12.15 | 2200 | 0.4105 | 0.8251 | 0.8252 |
| 0.3999 | 13.26 | 2400 | 0.4121 | 0.8215 | 0.8221 |
| 0.4004 | 14.36 | 2600 | 0.4111 | 0.8208 | 0.8211 |
| 0.4 | 15.47 | 2800 | 0.4195 | 0.8135 | 0.8148 |
| 0.3957 | 16.57 | 3000 | 0.4134 | 0.8212 | 0.8211 |
| 0.3955 | 17.68 | 3200 | 0.4111 | 0.8225 | 0.8228 |
| 0.3926 | 18.78 | 3400 | 0.4149 | 0.8223 | 0.8228 |
| 0.3896 | 19.89 | 3600 | 0.4149 | 0.8216 | 0.8221 |
| 0.3921 | 20.99 | 3800 | 0.4159 | 0.8204 | 0.8211 |
| 0.3895 | 22.1 | 4000 | 0.4121 | 0.8194 | 0.8200 |
| 0.3857 | 23.2 | 4200 | 0.4133 | 0.8213 | 0.8218 |
| 0.3866 | 24.31 | 4400 | 0.4180 | 0.8206 | 0.8214 |
| 0.3797 | 25.41 | 4600 | 0.4145 | 0.8245 | 0.8249 |
| 0.385 | 26.52 | 4800 | 0.4160 | 0.8230 | 0.8235 |
| 0.384 | 27.62 | 5000 | 0.4144 | 0.8237 | 0.8242 |
| 0.3796 | 28.73 | 5200 | 0.4158 | 0.8176 | 0.8187 |
| 0.3772 | 29.83 | 5400 | 0.4124 | 0.8267 | 0.8270 |
| 0.3771 | 30.94 | 5600 | 0.4157 | 0.8279 | 0.8280 |
| 0.377 | 32.04 | 5800 | 0.4146 | 0.8297 | 0.8298 |
| 0.3779 | 33.15 | 6000 | 0.4135 | 0.8277 | 0.8280 |
| 0.3741 | 34.25 | 6200 | 0.4180 | 0.8259 | 0.8263 |
| 0.3733 | 35.36 | 6400 | 0.4232 | 0.8240 | 0.8245 |
| 0.3751 | 36.46 | 6600 | 0.4161 | 0.8254 | 0.8256 |
| 0.3729 | 37.57 | 6800 | 0.4187 | 0.8231 | 0.8239 |
| 0.3732 | 38.67 | 7000 | 0.4192 | 0.8252 | 0.8256 |
| 0.369 | 39.78 | 7200 | 0.4170 | 0.8283 | 0.8287 |
| 0.3711 | 40.88 | 7400 | 0.4170 | 0.8252 | 0.8256 |
| 0.3687 | 41.99 | 7600 | 0.4171 | 0.8256 | 0.8259 |
| 0.3679 | 43.09 | 7800 | 0.4220 | 0.8237 | 0.8242 |
| 0.3711 | 44.2 | 8000 | 0.4207 | 0.8236 | 0.8242 |
| 0.3666 | 45.3 | 8200 | 0.4185 | 0.8277 | 0.8280 |
| 0.3659 | 46.41 | 8400 | 0.4203 | 0.8268 | 0.8270 |
| 0.3707 | 47.51 | 8600 | 0.4211 | 0.8247 | 0.8252 |
| 0.3634 | 48.62 | 8800 | 0.4217 | 0.8257 | 0.8263 |
| 0.3632 | 49.72 | 9000 | 0.4220 | 0.8263 | 0.8266 |
| 0.367 | 50.83 | 9200 | 0.4223 | 0.8242 | 0.8249 |
| 0.3636 | 51.93 | 9400 | 0.4223 | 0.8247 | 0.8252 |
| 0.3653 | 53.04 | 9600 | 0.4199 | 0.8277 | 0.8280 |
| 0.3633 | 54.14 | 9800 | 0.4204 | 0.8277 | 0.8280 |
| 0.3624 | 55.25 | 10000 | 0.4216 | 0.8262 | 0.8266 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:40:42+00:00 |
null | null |
# Dolphin 2.9 Llama 3 70b 🐬 (GGUF)
Quantized and converted to GGUF for Ollama.
## Example Modelfile
```
FROM ./dolphin-2.9-llama3-70b-Q6_K.gguf
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}<|im_end|>
"""
SYSTEM """You are Dolphin, a helpful AI assistant.
"""
PARAMETER num_ctx 8192
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
```
## Provided files
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [dolphin-2.9-llama3-70b-Q6_K.gguf-part00000](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q6_K.gguf-part00000) [dolphin-2.9-llama3-70b-Q6_K.gguf-part00001](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q6_K.gguf-part00001) | Q6_K | 54G |
| [dolphin-2.9-llama3-70b-Q8_0.gguf-part00000](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q8_0.gguf-part00000) [dolphin-2.9-llama3-70b-Q8_0.gguf-part00001](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q8_0.gguf-part00001) | Q8_0 | 70G |
## Combine multi-part files into one GGUF
```
cat dolphin-2.9-llama3-70b-Q6_K.gguf-part00000 dolphin-2.9-llama3-70b-Q6_K.gguf-part00001 > dolphin-2.9-llama3-70b-Q6_K.gguf
```
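If you want to load the recombined file outside of Ollama, here is a sketch with `llama-cpp-python` (the package choice and parameters are assumptions; any GGUF-capable runtime works):

```python
from llama_cpp import Llama

# Point at the GGUF produced by the cat command above.
llm = Llama(
    model_path="dolphin-2.9-llama3-70b-Q6_K.gguf",
    n_ctx=8192,       # matches num_ctx in the Modelfile
    n_gpu_layers=-1,  # offload all layers if VRAM allows
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Summarize what a GGUF file is."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```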
| {"license": "llama3", "tags": ["gguf", "llama3", "ollama", "dolphin"], "base_model": "cognitivecomputations/dolphin-2.9-llama3-70b"} | TrabEsrever/dolphin-2.9-llama3-70b-GGUF | null | [
"gguf",
"llama3",
"ollama",
"dolphin",
"base_model:cognitivecomputations/dolphin-2.9-llama3-70b",
"license:llama3",
"region:us"
]
| null | 2024-04-26T22:41:00+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-1b1 - bnb 4bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-1b1/
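A minimal usage sketch for the pre-quantized checkpoint. The repository id is taken from this page; loading 4-bit weights normally requires `bitsandbytes` and a CUDA GPU, which is an assumption about your environment.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/bigscience_-_bloom-1b1-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config is stored with the checkpoint, so no extra
# quantization arguments should be needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The BLOOM language model was trained on", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```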
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 1,065,314,304 parameters:
* 385,351,680 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 1536-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11d-760M-logs)
- Number of epochs: 1
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.7
- Validation Loss: 3.1
- Perplexity: 21.9
(More evaluation scores forthcoming at the end of model training.)
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
| {} | RichardErkhov/bigscience_-_bloom-1b1-4bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-26T22:42:49+00:00 |
null | null | {"license": "mit"} | camembercik/haneul_kiof | null | [
"license:mit",
"region:us"
]
| null | 2024-04-26T22:42:55+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-1b1 - bnb 8bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-1b1/
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 1,065,314,304 parameters:
* 385,351,680 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 1536-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11d-760M-logs)
- Number of epochs: 1
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.7
- Validation Loss: 3.1
- Perplexity: 21.9
(More evaluation scores forthcoming at the end of model training.)
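As a rough cross-check, the reported perplexity is approximately the exponential of the per-token validation cross-entropy loss; a minimal sketch (the small gap comes from rounding of the reported loss):

```python
# Hedged sanity check: perplexity ~= exp(validation cross-entropy loss)
import math

validation_loss = 3.1              # value reported above
print(math.exp(validation_loss))   # ~22.2, consistent with the reported 21.9
```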
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UDHR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
| {} | RichardErkhov/bigscience_-_bloom-1b1-8bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
]
| null | 2024-04-26T22:44:10+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4237
- F1 Score: 0.8280
- Accuracy: 0.8280
## Model description
More information needed
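Usage is not yet documented; as a placeholder, the snippet below is a minimal, hedged sketch of loading this adapter on top of its base model with the PEFT library. The repository ids come from this card, but the head class (`AutoModelForSequenceClassification`), the number of labels, and the tokenizer handling are assumptions.

```python
# Hedged sketch only: head class and num_labels are assumptions, not documented on this card.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)   # attaches the fine-tuned adapter weights
model.eval()
```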
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
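For reference, a hedged sketch of how these settings map onto `transformers.TrainingArguments`; the output directory and anything not listed above are placeholders:

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments (output_dir is a placeholder).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                    # placeholder, not from this card
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```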
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4707 | 1.1 | 200 | 0.4291 | 0.8228 | 0.8228 |
| 0.4371 | 2.21 | 400 | 0.4204 | 0.8165 | 0.8173 |
| 0.4271 | 3.31 | 600 | 0.4264 | 0.8078 | 0.8096 |
| 0.4137 | 4.42 | 800 | 0.4192 | 0.8204 | 0.8211 |
| 0.411 | 5.52 | 1000 | 0.4207 | 0.8150 | 0.8166 |
| 0.4007 | 6.63 | 1200 | 0.4399 | 0.8105 | 0.8128 |
| 0.3999 | 7.73 | 1400 | 0.4176 | 0.8174 | 0.8187 |
| 0.3908 | 8.84 | 1600 | 0.4468 | 0.7974 | 0.8006 |
| 0.3868 | 9.94 | 1800 | 0.4142 | 0.8220 | 0.8221 |
| 0.3813 | 11.05 | 2000 | 0.4262 | 0.8174 | 0.8176 |
| 0.3789 | 12.15 | 2200 | 0.4150 | 0.8262 | 0.8263 |
| 0.3671 | 13.26 | 2400 | 0.4240 | 0.8222 | 0.8225 |
| 0.37 | 14.36 | 2600 | 0.4270 | 0.8283 | 0.8284 |
| 0.3653 | 15.47 | 2800 | 0.4309 | 0.8262 | 0.8266 |
| 0.3582 | 16.57 | 3000 | 0.4206 | 0.8243 | 0.8242 |
| 0.3558 | 17.68 | 3200 | 0.4275 | 0.8240 | 0.8242 |
| 0.353 | 18.78 | 3400 | 0.4302 | 0.8182 | 0.8190 |
| 0.3482 | 19.89 | 3600 | 0.4251 | 0.8245 | 0.8245 |
| 0.3458 | 20.99 | 3800 | 0.4363 | 0.8171 | 0.8176 |
| 0.3426 | 22.1 | 4000 | 0.4343 | 0.8218 | 0.8218 |
| 0.3387 | 23.2 | 4200 | 0.4497 | 0.8240 | 0.8242 |
| 0.3376 | 24.31 | 4400 | 0.4404 | 0.8164 | 0.8173 |
| 0.3276 | 25.41 | 4600 | 0.4517 | 0.8171 | 0.8169 |
| 0.3318 | 26.52 | 4800 | 0.4462 | 0.8161 | 0.8166 |
| 0.3271 | 27.62 | 5000 | 0.4527 | 0.8191 | 0.8197 |
| 0.3246 | 28.73 | 5200 | 0.4728 | 0.8073 | 0.8086 |
| 0.3195 | 29.83 | 5400 | 0.4470 | 0.8235 | 0.8235 |
| 0.319 | 30.94 | 5600 | 0.4466 | 0.8217 | 0.8218 |
| 0.3159 | 32.04 | 5800 | 0.4485 | 0.8236 | 0.8235 |
| 0.3139 | 33.15 | 6000 | 0.4624 | 0.8213 | 0.8214 |
| 0.3091 | 34.25 | 6200 | 0.4689 | 0.8156 | 0.8155 |
| 0.31 | 35.36 | 6400 | 0.4868 | 0.8170 | 0.8176 |
| 0.3063 | 36.46 | 6600 | 0.4621 | 0.8180 | 0.8183 |
| 0.3038 | 37.57 | 6800 | 0.4723 | 0.8163 | 0.8169 |
| 0.3037 | 38.67 | 7000 | 0.4809 | 0.8154 | 0.8159 |
| 0.3012 | 39.78 | 7200 | 0.4831 | 0.8214 | 0.8218 |
| 0.3 | 40.88 | 7400 | 0.4767 | 0.8175 | 0.8176 |
| 0.2954 | 41.99 | 7600 | 0.4719 | 0.8132 | 0.8135 |
| 0.2918 | 43.09 | 7800 | 0.4852 | 0.8149 | 0.8152 |
| 0.2916 | 44.2 | 8000 | 0.4888 | 0.8163 | 0.8166 |
| 0.2925 | 45.3 | 8200 | 0.4773 | 0.8154 | 0.8155 |
| 0.2948 | 46.41 | 8400 | 0.4780 | 0.8172 | 0.8173 |
| 0.2926 | 47.51 | 8600 | 0.4925 | 0.8150 | 0.8155 |
| 0.2859 | 48.62 | 8800 | 0.4869 | 0.8142 | 0.8145 |
| 0.284 | 49.72 | 9000 | 0.5006 | 0.8146 | 0.8148 |
| 0.2889 | 50.83 | 9200 | 0.4914 | 0.8129 | 0.8135 |
| 0.2856 | 51.93 | 9400 | 0.4951 | 0.8139 | 0.8141 |
| 0.2832 | 53.04 | 9600 | 0.4966 | 0.8142 | 0.8145 |
| 0.2822 | 54.14 | 9800 | 0.4966 | 0.8136 | 0.8138 |
| 0.2826 | 55.25 | 10000 | 0.4992 | 0.8135 | 0.8138 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:45:51+00:00 |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-1b1 - GGUF
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-1b1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bloom-1b1.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q2_K.gguf) | Q2_K | 0.66GB |
| [bloom-1b1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ3_XS.gguf) | IQ3_XS | 0.73GB |
| [bloom-1b1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ3_S.gguf) | IQ3_S | 0.73GB |
| [bloom-1b1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q3_K_S.gguf) | Q3_K_S | 0.73GB |
| [bloom-1b1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ3_M.gguf) | IQ3_M | 0.77GB |
| [bloom-1b1.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q3_K.gguf) | Q3_K | 0.79GB |
| [bloom-1b1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q3_K_M.gguf) | Q3_K_M | 0.79GB |
| [bloom-1b1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [bloom-1b1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [bloom-1b1.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_0.gguf) | Q4_0 | 0.87GB |
| [bloom-1b1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ4_NL.gguf) | IQ4_NL | 0.87GB |
| [bloom-1b1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_K_S.gguf) | Q4_K_S | 0.87GB |
| [bloom-1b1.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_K.gguf) | Q4_K | 0.91GB |
| [bloom-1b1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_K_M.gguf) | Q4_K_M | 0.91GB |
| [bloom-1b1.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_1.gguf) | Q4_1 | 0.93GB |
| [bloom-1b1.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_0.gguf) | Q5_0 | 0.99GB |
| [bloom-1b1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_K_S.gguf) | Q5_K_S | 0.99GB |
| [bloom-1b1.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_K.gguf) | Q5_K | 1.02GB |
| [bloom-1b1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_K_M.gguf) | Q5_K_M | 1.02GB |
| [bloom-1b1.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_1.gguf) | Q5_1 | 1.05GB |
| [bloom-1b1.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q6_K.gguf) | Q6_K | 1.12GB |
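A minimal, hedged sketch of running one of the files above with `llama-cpp-python`; the chosen file, context size, and sampling settings are illustrative, and any quantization from the table can be substituted:

```python
# Hedged sketch: local inference over a GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="bloom-1b1.Q4_K_M.gguf", n_ctx=2048)
out = llm("Once upon a time", max_tokens=32, temperature=0.8)
print(out["choices"][0]["text"])
```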
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 1,065,314,304 parameters (a rough arithmetic check is sketched after this list):
* 385,351,680 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 1536-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
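A rough, hedged arithmetic check of the counts above, assuming a padded embedding matrix of 250,880 rows (an assumption; the tokenizer vocabulary is 250,680) and the usual 12·h² weights per transformer block, ignoring biases and layer norms:

```python
# Hedged back-of-the-envelope check; vocab_rows (padded embedding size) is an assumption.
hidden, layers, vocab_rows = 1536, 24, 250_880

embedding = vocab_rows * hidden            # 385,351,680 -> matches the figure above
per_block = 12 * hidden * hidden           # attention (~4*h^2) + MLP (~8*h^2)
total_approx = embedding + layers * per_block
print(embedding, total_approx)             # ~1.065e9, close to the quoted 1,065,314,304
```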
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node using NVLink 4 inter-gpu connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11d-760M-logs)
- Number of epochs: 1
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
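A small, hedged sketch of inspecting this tokenizer through the `transformers` API (the `bigscience/bloom-1b1` repository is assumed to ship the same tokenizer files):

```python
# Hedged sketch: loading and probing the BLOOM BPE tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-1b1")
print(tok.vocab_size)                          # expected to match the ~250,680 quoted above
print(tok.tokenize("BigScience ouvre la voie"))
```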
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation (a minimal generation sketch follows this list)
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
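A minimal, hedged generation sketch against the upstream checkpoint this card describes (`bigscience/bloom-1b1`); the prompt and decoding settings are illustrative:

```python
# Hedged sketch: plain text generation with the pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-1b1")
print(generator("Le chat dort sur", max_new_tokens=20)[0]["generated_text"])
```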
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor for uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb             | Ruby       | 678,413         |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.7
- Validation Loss: 3.1
- Perplexity: 21.9
(More evaluation scores forthcoming at the end of model training.)
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UDHR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
| {} | RichardErkhov/bigscience_-_bloom-1b1-gguf | null | [
"gguf",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"region:us"
]
| null | 2024-04-26T22:46:01+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-1b7 - bnb 4bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-1b7/
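How this 4-bit export was produced is not documented here; below is a hedged sketch of obtaining a comparable model by quantizing the upstream checkpoint at load time with bitsandbytes (all quantization settings are assumptions):

```python
# Hedged sketch: on-the-fly 4-bit quantization of the upstream checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tok = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7", quantization_config=bnb, device_map="auto"
)
```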
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-1b7
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor for uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks, and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb             | Ruby       | 678,413         |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
- [BLOOM Book](https://huggingface.co/spaces/bigscience/bloom-book): Read generations from BLOOM based on prompts provided by the community
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 1,722,408,960 parameters:
* 513,802,240 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 2048-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
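For concreteness, a hedged sketch of this objective as typically applied to next-token prediction; shapes and the shift-by-one target handling are illustrative rather than taken from the training code:

```python
# Hedged sketch: token-level cross-entropy with mean reduction over shifted targets.
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss(reduction="mean")    # the reduction named above
logits = torch.randn(2, 2048, 250_880)             # (batch, sequence, vocab) -- illustrative
targets = torch.randint(0, 250_880, (2, 2048))
loss = loss_fn(
    logits[:, :-1].reshape(-1, logits.size(-1)),   # predictions for positions 0..n-2
    targets[:, 1:].reshape(-1),                    # next-token targets
)
```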
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 64 V100 16/32GB GPUs (16 nodes):
* 4 GPUs per node
* 40 CPUs per task
* 1 task per node
* CPU: AMD
* CPU memory: 160GB per node
* GPU memory: 64GB or 128GB (depending on node availability during training) per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
- Checkpoint size:
- Fp16 weights: 2.6GB (# params * 2)
- Full checkpoint with optimizer states: --
- Training throughput: --
- Number of epochs: 1
- Dates:
- Start: 11th March, 2022 11:42am PST
- End: 20 May, 2022
- Server training location: Île-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
## Citation
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UDHR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** [email protected]
| {} | RichardErkhov/bigscience_-_bloom-1b7-4bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
]
| null | 2024-04-26T22:46:11+00:00 |
null | null | {} | sherrys/llama2_0.5_r256_alpha128 | null | [
"region:us"
]
| null | 2024-04-26T22:46:39+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_ASOPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
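Pending fuller documentation, a minimal, hedged sketch of loading this checkpoint as a sequence-to-sequence model; the expected instruction/prompt format is not documented here, so the input string and decoding settings are placeholders:

```python
# Hedged sketch only: prompt format and generation settings are placeholders.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "ThuyNT/CS505_COQE_viT5_train_Instruction0_ASOPL_v1"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

ids = tok("input sentence here", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=64)[0], skip_special_tokens=True))
```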
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_ASOPL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_ASOPL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:47:25+00:00 |
text-to-video | transformers |
Hello! This is Kalaphant's AI Video Gen tool. | {"language": ["en"], "library_name": "transformers", "datasets": ["Kalaphant/Videos"], "metrics": ["accuracy"], "pipeline_tag": "text-to-video"} | Kalaphant/KalaVids | null | [
"transformers",
"transformer",
"text-to-video",
"en",
"dataset:Kalaphant/Videos",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:47:50+00:00 |
text-classification | transformers | {} | EinsZwo/nlid_mlm_ACTUALLY_mixed_supertagging-424_05 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:48:39+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-1b7 - bnb 8bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-1b7/
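How this 8-bit export was produced is not documented here; below is a hedged sketch of reproducing a comparable model by quantizing the upstream checkpoint at load time with bitsandbytes:

```python
# Hedged sketch: on-the-fly 8-bit quantization of the upstream checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_8bit=True)
tok = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7", quantization_config=bnb, device_map="auto"
)
```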
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-1b7
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor for uses with any material consequences on an individual's livelihood or wellbeing. The model can output content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks, and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 | | | |
| Isi Zulu | 0.001 | | | |
| Igbo | 0.001 | | | |
| Xhosa | 0.001 | | | |
| Kinyarwanda | 0.003 | | | |
| Yoruba | 0.006 | | | |
| Swahili | 0.02 | | | |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
- [BLOOM Book](https://huggingface.co/spaces/bigscience/bloom-book): Read generations from BLOOM based on prompts provided by the community
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 1,722,408,960 parameters:
* 513,802,240 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 2048-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
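For reference, a minimal sketch of that objective in PyTorch; the shapes and random tensors are illustrative only, not BLOOM's actual training code.

```python
import torch
import torch.nn.functional as F

vocab_size = 250_680                           # from the tokenizer section below
logits = torch.randn(2, 8, vocab_size)         # (batch, sequence, vocab) model outputs
labels = torch.randint(0, vocab_size, (2, 8))  # next-token targets

# Mean-reduced cross entropy, i.e. torch.nn.CrossEntropyLoss(reduction="mean")
loss = F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1), reduction="mean")
print(loss.item())
```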
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 64 V100 16/32GB GPUs (16 nodes):
* 4 GPUs per node
* 40 CPUs per task
* 1 task per node
* CPU: AMD
* CPU memory: 160GB per node
* GPU memory: 64GB or 128GB (depending on node availability during training) per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
- Checkpoint size:
- Fp16 weights: 2.6GB (# params * 2)
- Full checkpoint with optimizer states: --
- Training throughput: --
- Number of epochs: 1
- Dates:
- Start: 11th March, 2022 11:42am PST
- End: 20 May, 2022
 - Server training location: Île-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
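A quick sketch of inspecting this tokenizer with the `transformers` library; loading it via the original `bigscience/bloom-1b7` checkpoint is an assumption of convenience, since that repo ships the same tokenizer files.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
ids = tok("BLOOM est un modèle de langue multilingue.")["input_ids"]
print(len(tok), ids[:10])  # vocabulary size (250,680) and the first byte-level BPE token ids
```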
## Citation
**Cite as:** BigScience, _BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on the probability the model assigns to new data. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically, it is the exponential of the cross-entropy loss.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** [email protected]
| {} | RichardErkhov/bigscience_-_bloom-1b7-8bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
]
| null | 2024-04-26T22:48:58+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5573
## Model description
More information needed
## Intended uses & limitations
More information needed
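That said, a rough illustration of the intended use (text generation with the fine-tuned GPT-2 checkpoint in this repo; the prompt is arbitrary):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="BohanJiang/my_awesome_eli5_clm-model")
result = generator("Explain like I'm five: why is the sky blue?", max_new_tokens=40)
print(result[0]["generated_text"])
```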
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6991 | 1.0 | 1314 | 3.5643 |
| 3.5819 | 2.0 | 2628 | 3.5568 |
| 3.5421 | 3.0 | 3942 | 3.5573 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "my_awesome_eli5_clm-model", "results": []}]} | BohanJiang/my_awesome_eli5_clm-model | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:52:19+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4939
- F1 Score: 0.7766
- Accuracy: 0.7784
## Model description
More information needed
## Intended uses & limitations
More information needed
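A hedged sketch of how a PEFT adapter like this one can be attached to its base model. The head class, `num_labels=2`, and whether `trust_remote_code=True` is needed are assumptions, since the seqsight base model may use a custom architecture.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_4096_512_46M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter weights

logits = model(**tokenizer("ACGTACGTACGTACGT", return_tensors="pt")).logits
print(logits)
```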
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5943 | 1.01 | 200 | 0.5666 | 0.7340 | 0.7377 |
| 0.5561 | 2.02 | 400 | 0.5471 | 0.7404 | 0.7434 |
| 0.5419 | 3.03 | 600 | 0.5372 | 0.7440 | 0.7459 |
| 0.5357 | 4.04 | 800 | 0.5346 | 0.7502 | 0.7519 |
| 0.5297 | 5.05 | 1000 | 0.5272 | 0.7573 | 0.7585 |
| 0.5238 | 6.06 | 1200 | 0.5244 | 0.7606 | 0.7626 |
| 0.5185 | 7.07 | 1400 | 0.5264 | 0.7634 | 0.7652 |
| 0.517 | 8.08 | 1600 | 0.5206 | 0.7634 | 0.7658 |
| 0.5133 | 9.09 | 1800 | 0.5169 | 0.7664 | 0.7683 |
| 0.5122 | 10.1 | 2000 | 0.5127 | 0.7698 | 0.7708 |
| 0.5065 | 11.11 | 2200 | 0.5191 | 0.7658 | 0.7677 |
| 0.507 | 12.12 | 2400 | 0.5073 | 0.7733 | 0.7743 |
| 0.5032 | 13.13 | 2600 | 0.5113 | 0.7706 | 0.7727 |
| 0.502 | 14.14 | 2800 | 0.5098 | 0.7700 | 0.7721 |
| 0.4986 | 15.15 | 3000 | 0.5093 | 0.7739 | 0.7756 |
| 0.498 | 16.16 | 3200 | 0.5080 | 0.7726 | 0.7746 |
| 0.4937 | 17.17 | 3400 | 0.5156 | 0.7691 | 0.7711 |
| 0.4984 | 18.18 | 3600 | 0.5022 | 0.7748 | 0.7756 |
| 0.4921 | 19.19 | 3800 | 0.5046 | 0.7721 | 0.7740 |
| 0.4903 | 20.2 | 4000 | 0.5061 | 0.7763 | 0.7778 |
| 0.493 | 21.21 | 4200 | 0.5116 | 0.7671 | 0.7705 |
| 0.4891 | 22.22 | 4400 | 0.5099 | 0.7749 | 0.7765 |
| 0.488 | 23.23 | 4600 | 0.5085 | 0.7699 | 0.7724 |
| 0.4916 | 24.24 | 4800 | 0.5057 | 0.7667 | 0.7696 |
| 0.485 | 25.25 | 5000 | 0.5042 | 0.7776 | 0.7784 |
| 0.4873 | 26.26 | 5200 | 0.5030 | 0.7792 | 0.7800 |
| 0.4822 | 27.27 | 5400 | 0.5046 | 0.7718 | 0.7740 |
| 0.4877 | 28.28 | 5600 | 0.5023 | 0.7764 | 0.7771 |
| 0.4872 | 29.29 | 5800 | 0.5102 | 0.7637 | 0.7674 |
| 0.4822 | 30.3 | 6000 | 0.5025 | 0.7773 | 0.7790 |
| 0.481 | 31.31 | 6200 | 0.5017 | 0.7793 | 0.7806 |
| 0.4839 | 32.32 | 6400 | 0.5070 | 0.7677 | 0.7705 |
| 0.4813 | 33.33 | 6600 | 0.5057 | 0.7736 | 0.7756 |
| 0.4783 | 34.34 | 6800 | 0.5045 | 0.7740 | 0.7762 |
| 0.4755 | 35.35 | 7000 | 0.5018 | 0.7760 | 0.7775 |
| 0.4832 | 36.36 | 7200 | 0.5001 | 0.7745 | 0.7759 |
| 0.4776 | 37.37 | 7400 | 0.5010 | 0.7761 | 0.7775 |
| 0.4765 | 38.38 | 7600 | 0.5030 | 0.7778 | 0.7787 |
| 0.4765 | 39.39 | 7800 | 0.5011 | 0.7774 | 0.7787 |
| 0.475 | 40.4 | 8000 | 0.5023 | 0.7780 | 0.7794 |
| 0.474 | 41.41 | 8200 | 0.5043 | 0.7714 | 0.7737 |
| 0.48 | 42.42 | 8400 | 0.5040 | 0.7747 | 0.7768 |
| 0.472 | 43.43 | 8600 | 0.5030 | 0.7762 | 0.7778 |
| 0.4766 | 44.44 | 8800 | 0.5021 | 0.7756 | 0.7775 |
| 0.4734 | 45.45 | 9000 | 0.5029 | 0.7785 | 0.7800 |
| 0.4763 | 46.46 | 9200 | 0.5029 | 0.7768 | 0.7784 |
| 0.4764 | 47.47 | 9400 | 0.5021 | 0.7759 | 0.7778 |
| 0.4682 | 48.48 | 9600 | 0.5038 | 0.7757 | 0.7775 |
| 0.4802 | 49.49 | 9800 | 0.5024 | 0.7757 | 0.7775 |
| 0.4731 | 50.51 | 10000 | 0.5023 | 0.7771 | 0.7787 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:52:43+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4107
- F1 Score: 0.8317
- Accuracy: 0.8318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4657 | 1.1 | 200 | 0.4243 | 0.8232 | 0.8235 |
| 0.4324 | 2.21 | 400 | 0.4146 | 0.8226 | 0.8232 |
| 0.419 | 3.31 | 600 | 0.4131 | 0.8177 | 0.8187 |
| 0.4025 | 4.42 | 800 | 0.4374 | 0.8184 | 0.8197 |
| 0.3946 | 5.52 | 1000 | 0.4139 | 0.8245 | 0.8256 |
| 0.3836 | 6.63 | 1200 | 0.4511 | 0.8067 | 0.8093 |
| 0.3801 | 7.73 | 1400 | 0.4200 | 0.8173 | 0.8187 |
| 0.3655 | 8.84 | 1600 | 0.4493 | 0.8072 | 0.8096 |
| 0.3577 | 9.94 | 1800 | 0.4156 | 0.8287 | 0.8287 |
| 0.349 | 11.05 | 2000 | 0.4461 | 0.8216 | 0.8214 |
| 0.3388 | 12.15 | 2200 | 0.4382 | 0.8202 | 0.8207 |
| 0.3232 | 13.26 | 2400 | 0.4517 | 0.8253 | 0.8252 |
| 0.3228 | 14.36 | 2600 | 0.4483 | 0.8233 | 0.8232 |
| 0.3097 | 15.47 | 2800 | 0.4711 | 0.8175 | 0.8180 |
| 0.2971 | 16.57 | 3000 | 0.4563 | 0.8274 | 0.8273 |
| 0.2892 | 17.68 | 3200 | 0.4694 | 0.8214 | 0.8214 |
| 0.2808 | 18.78 | 3400 | 0.5045 | 0.8169 | 0.8176 |
| 0.2734 | 19.89 | 3600 | 0.4833 | 0.8252 | 0.8252 |
| 0.2627 | 20.99 | 3800 | 0.5058 | 0.8135 | 0.8138 |
| 0.2505 | 22.1 | 4000 | 0.5084 | 0.8159 | 0.8159 |
| 0.2465 | 23.2 | 4200 | 0.5335 | 0.8198 | 0.8200 |
| 0.2375 | 24.31 | 4400 | 0.5538 | 0.8065 | 0.8076 |
| 0.2215 | 25.41 | 4600 | 0.5602 | 0.8187 | 0.8190 |
| 0.2222 | 26.52 | 4800 | 0.5591 | 0.8140 | 0.8141 |
| 0.2141 | 27.62 | 5000 | 0.5945 | 0.8170 | 0.8173 |
| 0.2076 | 28.73 | 5200 | 0.6232 | 0.8062 | 0.8076 |
| 0.1993 | 29.83 | 5400 | 0.6059 | 0.8126 | 0.8131 |
| 0.1939 | 30.94 | 5600 | 0.5995 | 0.8103 | 0.8103 |
| 0.1895 | 32.04 | 5800 | 0.6070 | 0.8108 | 0.8107 |
| 0.1823 | 33.15 | 6000 | 0.6381 | 0.8131 | 0.8131 |
| 0.1745 | 34.25 | 6200 | 0.6827 | 0.8042 | 0.8044 |
| 0.1749 | 35.36 | 6400 | 0.7136 | 0.8124 | 0.8131 |
| 0.1688 | 36.46 | 6600 | 0.6665 | 0.8150 | 0.8155 |
| 0.1652 | 37.57 | 6800 | 0.6561 | 0.8112 | 0.8114 |
| 0.1603 | 38.67 | 7000 | 0.6930 | 0.8126 | 0.8131 |
| 0.1634 | 39.78 | 7200 | 0.6943 | 0.8138 | 0.8141 |
| 0.1508 | 40.88 | 7400 | 0.7126 | 0.8124 | 0.8124 |
| 0.1477 | 41.99 | 7600 | 0.7014 | 0.8085 | 0.8086 |
| 0.1395 | 43.09 | 7800 | 0.7491 | 0.8117 | 0.8121 |
| 0.1377 | 44.2 | 8000 | 0.7569 | 0.8102 | 0.8103 |
| 0.1428 | 45.3 | 8200 | 0.7091 | 0.8114 | 0.8114 |
| 0.1438 | 46.41 | 8400 | 0.7434 | 0.8128 | 0.8131 |
| 0.1383 | 47.51 | 8600 | 0.7801 | 0.8052 | 0.8058 |
| 0.1333 | 48.62 | 8800 | 0.7809 | 0.8063 | 0.8065 |
| 0.1294 | 49.72 | 9000 | 0.8066 | 0.8081 | 0.8083 |
| 0.1289 | 50.83 | 9200 | 0.7828 | 0.8084 | 0.8086 |
| 0.1257 | 51.93 | 9400 | 0.8019 | 0.8066 | 0.8069 |
| 0.1273 | 53.04 | 9600 | 0.7990 | 0.8099 | 0.8100 |
| 0.1216 | 54.14 | 9800 | 0.8111 | 0.8128 | 0.8131 |
| 0.1218 | 55.25 | 10000 | 0.8162 | 0.8122 | 0.8124 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
]
| null | 2024-04-26T22:52:43+00:00 |
text-generation | null |
# Crow13/nvnv-alice-Q8_0-GGUF
This model was converted to GGUF format from [`instructkr/nvnv-alice`](https://huggingface.co/instructkr/nvnv-alice) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/instructkr/nvnv-alice) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Crow13/nvnv-alice-Q8_0-GGUF --model nvnv-alice.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Crow13/nvnv-alice-Q8_0-GGUF --model nvnv-alice.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nvnv-alice.Q8_0.gguf -n 128
```
| {"language": ["ko"], "license": ["cc-by-nc-4.0"], "tags": ["llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"} | Crow13/nvnv-alice-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-04-26T22:54:17+00:00 |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-1b7 - GGUF
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-1b7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bloom-1b7.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q2_K.gguf) | Q2_K | 0.98GB |
| [bloom-1b7.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_XS.gguf) | IQ3_XS | 1.08GB |
| [bloom-1b7.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_S.gguf) | IQ3_S | 1.1GB |
| [bloom-1b7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_S.gguf) | Q3_K_S | 1.1GB |
| [bloom-1b7.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ3_M.gguf) | IQ3_M | 1.15GB |
| [bloom-1b7.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K.gguf) | Q3_K | 1.2GB |
| [bloom-1b7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_M.gguf) | Q3_K_M | 1.2GB |
| [bloom-1b7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q3_K_L.gguf) | Q3_K_L | 1.25GB |
| [bloom-1b7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ4_XS.gguf) | IQ4_XS | 1.27GB |
| [bloom-1b7.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_0.gguf) | Q4_0 | 1.31GB |
| [bloom-1b7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.IQ4_NL.gguf) | IQ4_NL | 1.31GB |
| [bloom-1b7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K_S.gguf) | Q4_K_S | 1.31GB |
| [bloom-1b7.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K.gguf) | Q4_K | 1.39GB |
| [bloom-1b7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_K_M.gguf) | Q4_K_M | 1.39GB |
| [bloom-1b7.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q4_1.gguf) | Q4_1 | 1.41GB |
| [bloom-1b7.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_0.gguf) | Q5_0 | 1.51GB |
| [bloom-1b7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K_S.gguf) | Q5_K_S | 1.51GB |
| [bloom-1b7.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K.gguf) | Q5_K | 1.57GB |
| [bloom-1b7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_K_M.gguf) | Q5_K_M | 1.57GB |
| [bloom-1b7.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q5_1.gguf) | Q5_1 | 1.61GB |
| [bloom-1b7.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b7-gguf/blob/main/bloom-1b7.Q6_K.gguf) | Q6_K | 1.72GB |
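If you prefer Python over the llama.cpp CLI, the `llama-cpp-python` bindings can load these files. A hedged sketch follows; the chosen quant file is simply one row from the table above, and the context size is illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/bigscience_-_bloom-1b7-gguf",
    filename="bloom-1b7.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The BLOOM language model is", max_tokens=32)
print(out["choices"][0]["text"])
```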
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
# Model Card for Bloom-1b7
<!-- Provide a quick summary of what the model is/does. -->
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Data](#training-data)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications)
9. [Citation](#citation)
10. [Glossary and Calculations](#glossary-and-calculations)
11. [More Information](#more-information)
12. [Model Card Authors](#model-card-authors)
13. [Model Card Contact](#model-card-contact)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple; see [training data](#training-data)
- **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
- **Release Date Estimate:** Monday, 11.July.2022
- **Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor for uses with any material consequences on an individual's livelihood or wellbeing. The model can output content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks, and Limitations
*This section identifies foreseeable harms and misunderstandings.*
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

**The following table shows the further distribution of Niger-Congo and Indic languages in the training data.**
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 | | | |
| Isi Zulu | 0.001 | | | |
| Igbo | 0.001 | | | |
| Xhosa | 0.001 | | | |
| Kinyarwanda | 0.003 | | | |
| Yoruba | 0.006 | | | |
| Swahili | 0.02 | | | |
**The following table shows the distribution of programming languages.**
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
## Evaluation
*This section describes the evaluation protocols and provides the results.*
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
- [BLOOM Book](https://huggingface.co/spaces/bigscience/bloom-book): Read generations from BLOOM based on prompts provided by the community
## Environmental Impact
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
## Technical Specifications
*This section provides information for people who work on model development.*
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 1,722,408,960 parameters:
* 513,802,240 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 2048-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 64 V100 16/32GB GPUs (16 nodes):
* 4 GPUs per node
* 40 CPUs per task
* 1 task per node
* CPU: AMD
* CPU memory: 160GB per node
* GPU memory: 64GB or 128GB (depending on node availability during training) per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
### **Training**
- Checkpoint size:
- Fp16 weights: 2.6GB (# params * 2)
- Full checkpoint with optimizer states: --
- Training throughput: --
- Number of epochs: 1
- Dates:
- Start: 11th March, 2022 11:42am PST
- End: 20 May, 2022
 - Server training location: Île-de-France, France
### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
## Citation
**Cite as:** BigScience, _BigScience Large Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on the probability the model assigns to new data. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically, it is the exponential of the cross-entropy loss.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
## More Information
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
## Model Card Contact
**Send Questions to:** [email protected]
| {} | RichardErkhov/bigscience_-_bloom-1b7-gguf | null | [
"gguf",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"region:us"
]
| null | 2024-04-26T22:54:23+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-qa-squadv2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
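A minimal usage sketch; the question and context below are made up, and "SQuAD v2-style data" is inferred from the model name rather than stated in this card.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="momo345/bert-finetuned-qa-squadv2")
answer = qa(
    question="What base model was fine-tuned?",
    context="This checkpoint was fine-tuned from bert-base-uncased on SQuAD v2-style question answering data.",
)
print(answer)  # extracted span with its confidence score
```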
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-finetuned-qa-squadv2", "results": []}]} | momo345/bert-finetuned-qa-squadv2 | null | [
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:54:24+00:00 |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
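For intuition (this is the textbook definition of spherical linear interpolation, not necessarily mergekit's exact per-tensor implementation), two parameter vectors $q_0$ and $q_1$ are combined with interpolation factor $t$ as

$$\mathrm{slerp}(q_0, q_1; t) = \frac{\sin\bigl((1-t)\theta\bigr)}{\sin\theta}\,q_0 + \frac{\sin(t\theta)}{\sin\theta}\,q_1, \qquad \cos\theta = \frac{q_0\cdot q_1}{\lVert q_0\rVert\,\lVert q_1\rVert},$$

where the per-layer $t$ values come from the configuration below.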
### Models Merged
The following models were included in the merge:
* [aaditya/OpenBioLLM-Llama3-8B](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B)
* [o2satz/WS_med_QA_Dolphin](https://huggingface.co/o2satz/WS_med_QA_Dolphin)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: o2satz/WS_med_QA_Dolphin
layer_range:
- 0
- 32
- model: aaditya/OpenBioLLM-Llama3-8B
layer_range:
- 0
- 32
merge_method: slerp
base_model: o2satz/WS_med_QA_Dolphin
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
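A hedged usage sketch for the merged checkpoint; prompt formatting for the underlying Llama-3-based models is not shown, so apply whatever chat template you need, and `device_map="auto"` assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "o2satz/WS_med_QA_DolphinBioLLM"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

prompt = "Question: What are common symptoms of iron-deficiency anemia?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```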
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["aaditya/OpenBioLLM-Llama3-8B", "o2satz/WS_med_QA_Dolphin"]} | o2satz/WS_med_QA_DolphinBioLLM | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:aaditya/OpenBioLLM-Llama3-8B",
"base_model:o2satz/WS_med_QA_Dolphin",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:54:31+00:00 |
null | null | {"license": "openrail"} | vicmbcc/belindapop | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-26T22:55:17+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/o2h629g | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:56:10+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/g9ztueq | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:56:10+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/qyef935 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:56:10+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/wf7sbh1 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:56:10+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/lu09vm4 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:56:10+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/kct4vs9 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T22:56:10+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amtibot_t5
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3926
- Rouge1: 0.3075
- Rouge2: 0.1254
- Rougel: 0.2587
- Rougelsum: 0.2591
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 0.92 | 9 | 2.9455 | 0.3081 | 0.128 | 0.265 | 0.2643 | 19.0 |
| No log | 1.95 | 19 | 2.5732 | 0.3069 | 0.1305 | 0.2575 | 0.257 | 19.0 |
| No log | 2.97 | 29 | 2.4209 | 0.3039 | 0.1243 | 0.2548 | 0.2559 | 19.0 |
| No log | 3.69 | 36 | 2.3926 | 0.3075 | 0.1254 | 0.2587 | 0.2591 | 19.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
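A minimal usage sketch, assuming the `josiahgottfried/amtibot_t5` checkpoint is public on the Hub and that inputs fit within the T5 context window: the fine-tuned model can be loaded with the standard `transformers` summarization pipeline.
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint directly from the Hub
summarizer = pipeline("summarization", model="josiahgottfried/amtibot_t5")

text = "Replace this with the document you want to summarize."
# Gen Len in the results table is ~19 tokens, so keep the output short
print(summarizer(text, max_length=20, min_length=5)[0]["summary_text"])
```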
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-base", "model-index": [{"name": "amtibot_t5", "results": []}]} | josiahgottfried/amtibot_t5 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:56:32+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3", "results": []}]} | ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:56:56+00:00 |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
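To try this checkpoint locally, the repository can first be pulled down from the Hub — a minimal sketch, assuming the `mlagents-load-from-hf` helper from the ML-Agents Hub integration is installed:
```bash
mlagents-load-from-hf --repo-id="Joalbom14/ppo-SnowballTarget" --local-dir="./downloads"
```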
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Joalbom14/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]} | Joalbom14/ppo-SnowballTarget | null | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| null | 2024-04-26T22:57:37+00:00 |
text-generation | transformers | # ExperimentOne (Mistral-Hermes-Dolphin-7b)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base.
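Merges like this are typically produced with the mergekit CLI pointed at the configuration listed under *Configuration* below — a hypothetical invocation, where the config file name and output path are illustrative assumptions rather than values from this card:
```bash
# Assumed file name and output path; the YAML content is the configuration shown below
mergekit-yaml ./experimentone.yaml ./ExperimentOne --cuda
```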
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["mistralai/Mistral-7B-v0.1", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "NousResearch/Hermes-2-Pro-Mistral-7B"]} | TitleOS/ExperimentOne | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:59:14+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | devesh220897/financial-chatbot-for-young-adults-1 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T22:59:42+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-chat-hf_fictional_arc_easy_english_v3
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 18
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
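A minimal loading sketch, assuming the fine-tuned weights were pushed as a full model to `yzhuang/Llama-2-7b-chat-hf_fictional_arc_easy_english_v3` and that the Llama 2 license terms have been accepted; the example question is illustrative only:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yzhuang/Llama-2-7b-chat-hf_fictional_arc_easy_english_v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama-2-chat checkpoints ship a chat template, so format the prompt with it
messages = [{"role": "user", "content": "Which is heavier: a kilogram of feathers or a kilogram of iron?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```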
| {"license": "llama2", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "Llama-2-7b-chat-hf_fictional_arc_easy_english_v3", "results": []}]} | yzhuang/Llama-2-7b-chat-hf_fictional_arc_easy_english_v3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T23:00:59+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_SOPAL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_SOPAL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_SOPAL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T23:01:16+00:00 |
text-generation | transformers | {"license": "mit"} | flpelerin/char-tokenizer | null | [
"transformers",
"gpt_neox",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T23:01:18+00:00 |
|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EinsZwo/nlid_mlm_supertag050-424-sanitysaveaftertrain | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T23:01:21+00:00 |
audio-to-audio | null | 
# Cam - Camaron Marvel Ochs [EN] (2020)
# 354 Epochs - RVC V2 - rmvpe
Trained on 11 minutes 36 seconds of isolated acapellas from The Otherside album, extracted with UVR (Voc FT + Reverb HQ)
and cleaned up in Audacity to remove sections with doubled vocals and other singers' vocals (plus a noise gate).
"music",
"rvc",
"cam",
"camaron",
"marvel",
"ochs",
"model",
"audio-to-audio",
"en",
"license:openrail",
"region:us"
]
| null | 2024-04-26T23:03:03+00:00 |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | p-alonso/t5-small-es-ast | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T23:04:39+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
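
No usage snippet is documented above. The following is a minimal sketch based only on this card's `transformers`/`llama`/`text-generation` tags and the repository id from its metadata, assuming the standard causal-LM API; the "4bit" suffix in the model name suggests a quantized checkpoint, which may require `bitsandbytes` to load, but that is a guess rather than documented behavior.

```python
# Minimal sketch: load the checkpoint with the standard transformers causal-LM API.
# The repository id comes from this card's metadata; generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "basakerdogan/cyber-jarvis-4bit-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Hello, how can you help me today?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```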
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | basakerdogan/cyber-jarvis-4bit-2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T23:04:58+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_Instruction0_OSPAL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
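
As an illustration of how the hyperparameters above map onto the 🤗 Trainer API, the sketch below mirrors them with `Seq2SeqTrainingArguments`. The output directory is a placeholder, and the model, tokenizer, and datasets are not shown; the optimizer betas and epsilon listed above are the Trainer defaults.

```python
# Illustrative only: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir is a hypothetical placeholder, not part of this card.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="CS505_COQE_viT5",    # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,                       # "Native AMP" mixed precision
)
# These arguments would then be passed to Seq2SeqTrainer together with the
# model, tokenizer, data collator, and train/eval datasets.
```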
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_OSPAL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_Instruction0_OSPAL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T23:05:30+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
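
For readers reconstructing this configuration, the sketch below expresses the listed values with `TrainingArguments`; the total train batch size of 256 follows from 8 devices × per-device batch size 8 × 4 gradient-accumulation steps. The output directory is a placeholder, and passing these arguments to `trl`'s `DPOTrainer`, as the card's tags suggest, is an assumption rather than something documented here.

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# The 8-GPU launch would be handled by accelerate/torchrun and is not shown here.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="iter_3",             # placeholder
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 GPUs x 4 accumulation steps = 256 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
# These arguments would typically be passed to trl's DPOTrainer along with the
# policy model, a reference model, the tokenizer, and the preference dataset.
```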
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-26T23:06:43+00:00 |
null | transformers | {} | luonluonvn/m2m100_418_int8_ct2 | null | [
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T23:07:29+00:00 |