Dataset schema (column, dtype, observed range or distinct values):

| Column | Dtype | Range / values |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | n/a |
| tags | sequencelengths | 1–1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
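Each record below holds one Hub model repository. As a minimal sketch of how a dump with this schema can be consumed, assuming it is published as a Hugging Face dataset (the repo id below is a placeholder, not the actual dataset name):

```python
# Minimal sketch: iterate a dataset with the schema above.
# "username/model-cards-dump" is a placeholder repo id, not the real dataset.
from datasets import load_dataset
import json

ds = load_dataset("username/model-cards-dump", split="train")

for row in ds.select(range(3)):
    meta = json.loads(row["metadata"])          # metadata is stored as a JSON string
    print(row["id"], row["pipeline_tag"], row["library_name"])
    print("tags:", row["tags"][:5])
    print("card excerpt:", row["text"][:200])   # raw model-card markdown
    print("library from metadata:", meta.get("library_name"))
```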
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
liquid9212/1rtdb86
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:00:13+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** WeOneGuy - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
WeOneGuy/mistral-7b-alpaca
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:00:14+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: WeOneGuy - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: WeOneGuy\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: WeOneGuy\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/cxo3sk6
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:00:17+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/1pem3u5
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:00:55+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs256_nodpo_only4w_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_iter_1", "results": []}]}
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:01:45+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_4iters_bs256_nodpo_only4w_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.001_4iters_bs256_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_4iters_bs256_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4860 - F1 Score: 0.7868 - Accuracy: 0.7863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5557 | 1.15 | 200 | 0.5435 | 0.7278 | 0.7272 | | 0.5083 | 2.3 | 400 | 0.5559 | 0.7251 | 0.7280 | | 0.4847 | 3.45 | 600 | 0.5117 | 0.7588 | 0.7585 | | 0.4722 | 4.6 | 800 | 0.4942 | 0.7637 | 0.7632 | | 0.4661 | 5.75 | 1000 | 0.4936 | 0.7687 | 0.7683 | | 0.4575 | 6.9 | 1200 | 0.4923 | 0.7702 | 0.7697 | | 0.4504 | 8.05 | 1400 | 0.5031 | 0.7624 | 0.7621 | | 0.442 | 9.2 | 1600 | 0.4930 | 0.7698 | 0.7697 | | 0.4356 | 10.34 | 1800 | 0.4876 | 0.7700 | 0.7697 | | 0.434 | 11.49 | 2000 | 0.4839 | 0.7726 | 0.7722 | | 0.4251 | 12.64 | 2200 | 0.4829 | 0.7725 | 0.7726 | | 0.4233 | 13.79 | 2400 | 0.4823 | 0.7755 | 0.7751 | | 0.4205 | 14.94 | 2600 | 0.4722 | 0.7765 | 0.7765 | | 0.4118 | 16.09 | 2800 | 0.4861 | 0.7733 | 0.7729 | | 0.4088 | 17.24 | 3000 | 0.4833 | 0.7799 | 0.7794 | | 0.4075 | 18.39 | 3200 | 0.4762 | 0.7748 | 0.7744 | | 0.4032 | 19.54 | 3400 | 0.4768 | 0.7716 | 0.7711 | | 0.3952 | 20.69 | 3600 | 0.4839 | 0.7788 | 0.7791 | | 0.3926 | 21.84 | 3800 | 0.4781 | 0.7741 | 0.7737 | | 0.391 | 22.99 | 4000 | 0.5085 | 0.7598 | 0.7603 | | 0.3901 | 24.14 | 4200 | 0.4865 | 0.7719 | 0.7715 | | 0.3786 | 25.29 | 4400 | 0.5031 | 0.7738 | 0.7733 | | 0.3817 | 26.44 | 4600 | 0.4994 | 0.7695 | 0.7690 | | 0.381 | 27.59 | 4800 | 0.4967 | 0.7763 | 0.7758 | | 0.374 | 28.74 | 5000 | 0.4907 | 0.7727 | 0.7722 | | 0.3769 | 29.89 | 5200 | 0.5001 | 0.7741 | 0.7737 | | 0.3672 | 31.03 | 5400 | 0.5043 | 0.7671 | 0.7668 | | 0.3688 | 32.18 | 5600 | 0.5008 | 0.7745 | 0.7740 | | 0.3603 | 33.33 | 5800 | 0.5100 | 0.7799 | 0.7794 | | 0.3643 | 34.48 | 6000 | 0.4972 | 0.7741 | 0.7737 | | 0.3533 | 35.63 | 6200 | 0.5166 | 0.7758 | 0.7755 | | 0.3604 | 36.78 | 6400 | 0.5027 | 0.7749 | 0.7744 | | 0.3553 | 37.93 | 6600 | 0.5220 | 0.7687 | 0.7683 | | 0.35 | 39.08 | 6800 | 0.5126 | 0.7741 | 0.7737 | | 0.3499 | 40.23 | 7000 | 0.5196 | 0.7677 | 0.7672 | | 0.3457 | 41.38 | 7200 | 0.5229 | 0.7684 | 0.7679 | | 0.3458 | 42.53 | 7400 | 0.5237 | 0.7684 | 0.7679 | | 0.3435 | 43.68 | 7600 | 0.5272 | 0.7708 | 0.7704 | | 0.3402 | 44.83 | 7800 | 0.5261 | 0.7709 | 0.7704 | | 0.3401 | 45.98 | 8000 | 0.5282 | 0.7696 | 0.7693 | | 0.3397 | 47.13 | 8200 | 0.5327 | 0.7655 | 0.7650 | | 0.3374 | 48.28 | 8400 | 0.5306 | 0.7691 | 0.7686 | | 0.3336 | 49.43 | 8600 | 0.5371 | 0.7659 | 0.7654 | | 0.335 | 50.57 | 8800 | 0.5357 | 0.7687 | 0.7683 | | 0.3384 | 51.72 | 
9000 | 0.5340 | 0.7695 | 0.7690 | | 0.3308 | 52.87 | 9200 | 0.5367 | 0.7666 | 0.7661 | | 0.3318 | 54.02 | 9400 | 0.5352 | 0.7677 | 0.7672 | | 0.3341 | 55.17 | 9600 | 0.5344 | 0.7659 | 0.7654 | | 0.3304 | 56.32 | 9800 | 0.5349 | 0.7673 | 0.7668 | | 0.3319 | 57.47 | 10000 | 0.5345 | 0.7673 | 0.7668 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:01:47+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3K9ac-seqsight\_4096\_512\_46M-L8\_f =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset. It achieves the following results on the evaluation set: * Loss: 0.4860 * F1 Score: 0.7868 * Accuracy: 0.7863 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
harir/mistral-7b-instruct-v0.1-review-toxicity
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:02:42+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
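The "How to Get Started with the Model" section above is left as a placeholder. Going only by this row's metadata (library `peft`, base model `mistralai/Mistral-7B-Instruct-v0.2`) and the repo id `sherrys/mistralRAFT_50e`, a minimal loading sketch would typically look like the following; it assumes a standard PEFT adapter layout and is an illustration, not code taken from the card.

```python
# Illustrative sketch only: assumes sherrys/mistralRAFT_50e is a standard PEFT adapter
# for the base model named in the row metadata (mistralai/Mistral-7B-Instruct-v0.2).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "sherrys/mistralRAFT_50e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights

prompt = "[INST] Summarize what a PEFT adapter does. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```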
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
sherrys/mistralRAFT_50e
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-26T21:04:08+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K9ac-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.4608 - F1 Score: 0.7911 - Accuracy: 0.7906 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5428 | 1.15 | 200 | 0.5428 | 0.7374 | 0.7373 | | 0.4911 | 2.3 | 400 | 0.5192 | 0.7412 | 0.7424 | | 0.4694 | 3.45 | 600 | 0.5180 | 0.7467 | 0.7474 | | 0.4552 | 4.6 | 800 | 0.4854 | 0.7676 | 0.7672 | | 0.4478 | 5.75 | 1000 | 0.4899 | 0.7629 | 0.7625 | | 0.4362 | 6.9 | 1200 | 0.4816 | 0.7806 | 0.7801 | | 0.4286 | 8.05 | 1400 | 0.4899 | 0.7714 | 0.7711 | | 0.4131 | 9.2 | 1600 | 0.5043 | 0.7677 | 0.7675 | | 0.4042 | 10.34 | 1800 | 0.5029 | 0.7677 | 0.7675 | | 0.3993 | 11.49 | 2000 | 0.4941 | 0.7762 | 0.7758 | | 0.3845 | 12.64 | 2200 | 0.4977 | 0.7681 | 0.7679 | | 0.3813 | 13.79 | 2400 | 0.5050 | 0.7671 | 0.7672 | | 0.3701 | 14.94 | 2600 | 0.5067 | 0.7630 | 0.7639 | | 0.3569 | 16.09 | 2800 | 0.5451 | 0.7525 | 0.7531 | | 0.3492 | 17.24 | 3000 | 0.5157 | 0.7690 | 0.7686 | | 0.3422 | 18.39 | 3200 | 0.5235 | 0.7674 | 0.7672 | | 0.3334 | 19.54 | 3400 | 0.5483 | 0.7607 | 0.7603 | | 0.3224 | 20.69 | 3600 | 0.5445 | 0.7689 | 0.7686 | | 0.3144 | 21.84 | 3800 | 0.5174 | 0.7727 | 0.7722 | | 0.3057 | 22.99 | 4000 | 0.5967 | 0.7518 | 0.7524 | | 0.304 | 24.14 | 4200 | 0.5790 | 0.7580 | 0.7575 | | 0.2867 | 25.29 | 4400 | 0.5979 | 0.7588 | 0.7589 | | 0.2816 | 26.44 | 4600 | 0.5985 | 0.7637 | 0.7632 | | 0.2795 | 27.59 | 4800 | 0.6115 | 0.7708 | 0.7704 | | 0.2665 | 28.74 | 5000 | 0.6015 | 0.7566 | 0.7564 | | 0.2717 | 29.89 | 5200 | 0.5972 | 0.7655 | 0.7650 | | 0.2551 | 31.03 | 5400 | 0.6186 | 0.7604 | 0.7600 | | 0.248 | 32.18 | 5600 | 0.6615 | 0.7590 | 0.7585 | | 0.2432 | 33.33 | 5800 | 0.6447 | 0.7752 | 0.7747 | | 0.237 | 34.48 | 6000 | 0.6588 | 0.7666 | 0.7661 | | 0.2305 | 35.63 | 6200 | 0.6836 | 0.7612 | 0.7607 | | 0.2316 | 36.78 | 6400 | 0.6486 | 0.7651 | 0.7647 | | 0.2246 | 37.93 | 6600 | 0.6591 | 0.7580 | 0.7575 | | 0.2174 | 39.08 | 6800 | 0.6870 | 0.7594 | 0.7589 | | 0.2112 | 40.23 | 7000 | 0.6890 | 0.7590 | 0.7585 | | 0.2073 | 41.38 | 7200 | 0.7309 | 0.7508 | 0.7503 | | 0.206 | 42.53 | 7400 | 0.7128 | 0.7547 | 0.7542 | | 0.2043 | 43.68 | 7600 | 0.7207 | 0.7630 | 0.7625 | | 0.1981 | 44.83 | 7800 | 0.7241 | 0.7512 | 0.7506 | | 0.195 | 45.98 | 8000 | 0.7531 | 0.7499 | 0.7496 | | 0.194 | 47.13 | 8200 | 0.7291 | 0.7522 | 0.7517 | | 0.1869 | 48.28 | 8400 | 0.7713 | 0.7565 | 0.7560 | | 0.184 | 49.43 | 8600 | 0.7801 | 0.7565 | 0.7560 | | 0.186 | 50.57 | 8800 | 0.7840 | 0.7583 | 0.7578 | | 0.1861 | 51.72 | 
9000 | 0.7701 | 0.7576 | 0.7571 | | 0.1811 | 52.87 | 9200 | 0.7714 | 0.7590 | 0.7585 | | 0.1827 | 54.02 | 9400 | 0.7581 | 0.7562 | 0.7557 | | 0.1784 | 55.17 | 9600 | 0.7658 | 0.7558 | 0.7553 | | 0.1766 | 56.32 | 9800 | 0.7785 | 0.7569 | 0.7564 | | 0.1769 | 57.47 | 10000 | 0.7781 | 0.7576 | 0.7571 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
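For readers who want to see how the hyperparameter list above maps onto code, here is a minimal sketch using `transformers.TrainingArguments`; it is not the script that produced this card, and the `model`, `train_ds`, and `eval_ds` names in the commented lines are hypothetical placeholders.

```python
# Minimal sketch mapping the listed hyperparameters onto TrainingArguments.
# This is not the original training script for the card above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3K9ac-seqsight_4096_512_46M-L32_f",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=128,  # train_batch_size: 128
    per_device_eval_batch_size=128,   # eval_batch_size: 128
    seed=42,
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # ... and epsilon=1e-08
    lr_scheduler_type="linear",
    max_steps=10_000,                 # training_steps: 10000
)

# trainer = Trainer(model=model, args=args,     # `model`, `train_ds`, `eval_ds` are
#                   train_dataset=train_ds,     # hypothetical placeholders for the
#                   eval_dataset=eval_ds)       # objects used in the original run
# trainer.train()
```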
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:04:44+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3K9ac-seqsight\_4096\_512\_46M-L32\_f ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset. It achieves the following results on the evaluation set: * Loss: 0.4608 * F1 Score: 0.7911 * Accuracy: 0.7906 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
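The card above is again the empty auto-generated template; the only usage hints are this row's tags (`llama`, `text-generation`, `conversational`). A hedged sketch based purely on those tags, with no claim about how the checkpoint actually behaves:

```python
# Hedged sketch based only on the row's tags (llama, text-generation, conversational);
# the card itself documents no intended usage.
from transformers import pipeline

pipe = pipeline("text-generation", model="shallow6414/gko6wa8", device_map="auto")

out = pipe("A model card is", max_new_tokens=40)
print(out[0]["generated_text"])
```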
{"library_name": "transformers", "tags": []}
shallow6414/gko6wa8
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:05:36+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# griffin-llama3t-8L-v0.02-fineweb Pretraining experiment with griffin/recurrent_gemma arch. This one uses the Llama-3 tokenizer. ## Model description Further training of [pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu](https://huggingface.co/pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu) on the BEE-spoke-data/fineweb-1M_en-med dataset. It achieves the following results on the evaluation set: - Loss: 5.6538 - Accuracy: 0.1881 - Num Input Tokens Seen: 766509056 ## evals tl;dr its bad/would need more training: hf (pretrained=pszemraj/griffin-llama3t-8L-v0.02-fineweb,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 4 | Tasks |Version|Filter|n-shot| Metric | Value | | Stderr | |--------------|------:|------|-----:|----------|----------:|---|---------:| |winogrande | 1|none | 0|acc | 0.4964|± | 0.0141| |piqa | 1|none | 0|acc | 0.5332|± | 0.0116| | | |none | 0|acc_norm | 0.5299|± | 0.0116| |openbookqa | 1|none | 0|acc | 0.1280|± | 0.0150| | | |none | 0|acc_norm | 0.2320|± | 0.0189| |lambada_openai| 1|none | 0|perplexity|638060.0702|± |43608.0044| | | |none | 0|acc | 0.0000|± | 0.0000| |boolq | 2|none | 0|acc | 0.3783|± | 0.0085| |arc_easy | 1|none | 0|acc | 0.2614|± | 0.0090| | | |none | 0|acc_norm | 0.2744|± | 0.0092| ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 80085 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 6.4019 | 0.0684 | 400 | 6.7690 | 0.1278 | 52428800 | | 6.0547 | 0.1368 | 800 | 6.4214 | 0.1460 | 104857600 | | 5.8133 | 0.2052 | 1200 | 6.2566 | 0.1550 | 157286400 | | 5.7212 | 0.2736 | 1600 | 6.1411 | 0.1620 | 209715200 | | 5.6175 | 0.3420 | 2000 | 6.0502 | 0.1669 | 262144000 | | 5.5014 | 0.4104 | 2400 | 5.9827 | 0.1687 | 314572800 | | 5.4882 | 0.4788 | 2800 | 5.9203 | 0.1731 | 367001600 | | 5.3972 | 0.5472 | 3200 | 5.8614 | 0.1782 | 419430400 | | 5.3983 | 0.6156 | 3600 | 5.8340 | 0.1773 | 471859200 | | 5.3175 | 0.6840 | 4000 | 5.7916 | 0.1814 | 524288000 | | 5.3014 | 0.7524 | 4400 | 5.7565 | 0.1814 | 576716800 | | 5.2749 | 0.8208 | 4800 | 5.7303 | 0.1849 | 629145600 | | 5.2264 | 0.8892 | 5200 | 5.6993 | 0.1850 | 681574400 | | 5.2107 | 0.9576 | 5600 | 5.6745 | 0.1884 | 734003200 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
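Since the evaluation command in the card loads the checkpoint with `trust_remote_code=True`, a loading sketch consistent with that setting follows (the prompt and generation settings are illustrative, not taken from the card):

```python
# Mirrors the eval command shown in the card (trust_remote_code=True, float precision);
# prompt and generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "pszemraj/griffin-llama3t-8L-v0.02-fineweb"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, torch_dtype=torch.float32
)

inputs = tokenizer("The griffin architecture mixes", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```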
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["BEE-spoke-data/fineweb-1M_en-med"], "metrics": ["accuracy"], "base_model": "pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu", "model-index": [{"name": "griffin-1024-llama3t-8layer-simplewiki-silu-fineweb-1M_en-med-vN", "results": []}]}
pszemraj/griffin-llama3t-8L-v0.02-fineweb
null
[ "transformers", "safetensors", "recurrent_gemma", "text-generation", "generated_from_trainer", "en", "dataset:BEE-spoke-data/fineweb-1M_en-med", "base_model:pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:06:07+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #recurrent_gemma #text-generation #generated_from_trainer #en #dataset-BEE-spoke-data/fineweb-1M_en-med #base_model-pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
griffin-llama3t-8L-v0.02-fineweb ================================ Pretraining experiment with griffin/recurrent\_gemma arch. This one uses the Llama-3 tokenizer. Model description ----------------- Further training of pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu on the BEE-spoke-data/fineweb-1M\_en-med dataset. It achieves the following results on the evaluation set: * Loss: 5.6538 * Accuracy: 0.1881 * Num Input Tokens Seen: 766509056 evals ----- tl;dr its bad/would need more training: hf (pretrained=pszemraj/griffin-llama3t-8L-v0.02-fineweb,trust\_remote\_code=True,dtype=float), gen\_kwargs: (None), limit: None, num\_fewshot: None, batch\_size: 4 Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 80085 * gradient\_accumulation\_steps: 32 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07 * lr\_scheduler\_type: inverse\_sqrt * lr\_scheduler\_warmup\_ratio: 0.05 * num\_epochs: 1.0 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.3.0+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 80085\n* gradient\\_accumulation\\_steps: 32\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 1.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #recurrent_gemma #text-generation #generated_from_trainer #en #dataset-BEE-spoke-data/fineweb-1M_en-med #base_model-pszemraj/griffin-1024-llama3t-8layer-simplewiki-silu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 80085\n* gradient\\_accumulation\\_steps: 32\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 1.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Hajas0/hun_emotion_modifier
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:06:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5973 - F1 Score: 0.7040 - Accuracy: 0.7038 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6332 | 0.87 | 200 | 0.6018 | 0.6704 | 0.6704 | | 0.5958 | 1.74 | 400 | 0.5919 | 0.6796 | 0.6796 | | 0.5838 | 2.61 | 600 | 0.5895 | 0.6852 | 0.6853 | | 0.5763 | 3.48 | 800 | 0.5846 | 0.6876 | 0.6875 | | 0.5677 | 4.35 | 1000 | 0.5921 | 0.6824 | 0.6826 | | 0.5652 | 5.22 | 1200 | 0.5783 | 0.6956 | 0.6954 | | 0.5593 | 6.09 | 1400 | 0.5866 | 0.6958 | 0.6978 | | 0.5543 | 6.96 | 1600 | 0.5845 | 0.6953 | 0.6954 | | 0.5483 | 7.83 | 1800 | 0.5852 | 0.6892 | 0.6891 | | 0.5441 | 8.7 | 2000 | 0.5941 | 0.6931 | 0.6929 | | 0.5396 | 9.57 | 2200 | 0.5743 | 0.7011 | 0.7011 | | 0.538 | 10.43 | 2400 | 0.5905 | 0.7028 | 0.7027 | | 0.5338 | 11.3 | 2600 | 0.5764 | 0.6974 | 0.6981 | | 0.5368 | 12.17 | 2800 | 0.5788 | 0.6922 | 0.6924 | | 0.5281 | 13.04 | 3000 | 0.5787 | 0.6911 | 0.6908 | | 0.5243 | 13.91 | 3200 | 0.5804 | 0.7035 | 0.7035 | | 0.52 | 14.78 | 3400 | 0.5841 | 0.6971 | 0.6976 | | 0.5188 | 15.65 | 3600 | 0.5839 | 0.7026 | 0.7024 | | 0.5117 | 16.52 | 3800 | 0.5833 | 0.6984 | 0.6981 | | 0.5123 | 17.39 | 4000 | 0.5941 | 0.6931 | 0.6929 | | 0.5094 | 18.26 | 4200 | 0.6008 | 0.6993 | 0.6995 | | 0.5067 | 19.13 | 4400 | 0.5939 | 0.6957 | 0.6954 | | 0.5021 | 20.0 | 4600 | 0.5888 | 0.6989 | 0.7 | | 0.5014 | 20.87 | 4800 | 0.5931 | 0.7035 | 0.7035 | | 0.4989 | 21.74 | 5000 | 0.5859 | 0.6997 | 0.6995 | | 0.4973 | 22.61 | 5200 | 0.5988 | 0.7046 | 0.7043 | | 0.4939 | 23.48 | 5400 | 0.5977 | 0.7018 | 0.7024 | | 0.4883 | 24.35 | 5600 | 0.5954 | 0.6993 | 0.7003 | | 0.4912 | 25.22 | 5800 | 0.5949 | 0.7028 | 0.7027 | | 0.4846 | 26.09 | 6000 | 0.6026 | 0.7021 | 0.7024 | | 0.4873 | 26.96 | 6200 | 0.6011 | 0.7015 | 0.7027 | | 0.4811 | 27.83 | 6400 | 0.6024 | 0.7019 | 0.7024 | | 0.4842 | 28.7 | 6600 | 0.6047 | 0.7005 | 0.7005 | | 0.4798 | 29.57 | 6800 | 0.5992 | 0.7019 | 0.7019 | | 0.4748 | 30.43 | 7000 | 0.6004 | 0.7039 | 0.7038 | | 0.4818 | 31.3 | 7200 | 0.6029 | 0.7030 | 0.7030 | | 0.4738 | 32.17 | 7400 | 0.6089 | 0.7035 | 0.7033 | | 0.4734 | 33.04 | 7600 | 0.6043 | 0.7049 | 0.7046 | | 0.4724 | 33.91 | 7800 | 0.6026 | 0.7013 | 0.7016 | | 0.4717 | 34.78 | 8000 | 0.6066 | 0.7054 | 0.7052 | | 0.4678 | 35.65 | 8200 | 0.6146 | 0.6989 | 0.6986 | | 0.467 | 36.52 | 8400 | 0.6101 | 0.7035 | 0.7033 | | 0.4675 | 37.39 | 8600 | 0.6093 | 0.7052 | 0.7049 | | 0.4609 | 38.26 | 8800 | 0.6144 | 0.7014 | 0.7016 | | 0.4701 | 39.13 | 
9000 | 0.6064 | 0.7044 | 0.7043 | | 0.4623 | 40.0 | 9200 | 0.6104 | 0.7062 | 0.7060 | | 0.4589 | 40.87 | 9400 | 0.6133 | 0.7019 | 0.7016 | | 0.463 | 41.74 | 9600 | 0.6109 | 0.7043 | 0.7041 | | 0.4634 | 42.61 | 9800 | 0.6103 | 0.7032 | 0.7030 | | 0.4577 | 43.48 | 10000 | 0.6116 | 0.7040 | 0.7038 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
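The card above logs an F1 score and accuracy at every evaluation step. As an illustration of how such numbers are commonly produced with the `transformers` Trainer (a generic pattern, not necessarily the exact metric code behind this card):

```python
# Generic accuracy/F1 computation for a classification Trainer run;
# shown for illustration, not claimed to be the exact code behind the card above.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }

# Hooked into training as: Trainer(..., compute_metrics=compute_metrics)
```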
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:11:47+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3K4me3-seqsight\_4096\_512\_46M-L8\_f ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5973 * F1 Score: 0.7040 * Accuracy: 0.7038 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.5650 - F1 Score: 0.7048 - Accuracy: 0.7049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6432 | 0.87 | 200 | 0.6165 | 0.6638 | 0.6641 | | 0.6108 | 1.74 | 400 | 0.6013 | 0.6774 | 0.6772 | | 0.6 | 2.61 | 600 | 0.5975 | 0.6740 | 0.675 | | 0.5904 | 3.48 | 800 | 0.5925 | 0.6789 | 0.6796 | | 0.5862 | 4.35 | 1000 | 0.5976 | 0.6748 | 0.6772 | | 0.5837 | 5.22 | 1200 | 0.5871 | 0.6850 | 0.6864 | | 0.5789 | 6.09 | 1400 | 0.5926 | 0.6843 | 0.6861 | | 0.5751 | 6.96 | 1600 | 0.5854 | 0.6834 | 0.6832 | | 0.5716 | 7.83 | 1800 | 0.5896 | 0.6761 | 0.6774 | | 0.569 | 8.7 | 2000 | 0.5889 | 0.6859 | 0.6856 | | 0.567 | 9.57 | 2200 | 0.5760 | 0.6869 | 0.6870 | | 0.5665 | 10.43 | 2400 | 0.5823 | 0.6916 | 0.6913 | | 0.5622 | 11.3 | 2600 | 0.5757 | 0.6900 | 0.6897 | | 0.5658 | 12.17 | 2800 | 0.5766 | 0.6880 | 0.6880 | | 0.5611 | 13.04 | 3000 | 0.5799 | 0.6917 | 0.6916 | | 0.5585 | 13.91 | 3200 | 0.5750 | 0.6940 | 0.6937 | | 0.5556 | 14.78 | 3400 | 0.5772 | 0.6939 | 0.6943 | | 0.5572 | 15.65 | 3600 | 0.5763 | 0.6949 | 0.6946 | | 0.5507 | 16.52 | 3800 | 0.5802 | 0.6937 | 0.6935 | | 0.5539 | 17.39 | 4000 | 0.5754 | 0.6975 | 0.6973 | | 0.5526 | 18.26 | 4200 | 0.5799 | 0.6991 | 0.6989 | | 0.5506 | 19.13 | 4400 | 0.5792 | 0.6945 | 0.6943 | | 0.5481 | 20.0 | 4600 | 0.5740 | 0.7030 | 0.7033 | | 0.5481 | 20.87 | 4800 | 0.5770 | 0.7003 | 0.7003 | | 0.5488 | 21.74 | 5000 | 0.5765 | 0.6978 | 0.6976 | | 0.5472 | 22.61 | 5200 | 0.5760 | 0.7022 | 0.7019 | | 0.5451 | 23.48 | 5400 | 0.5786 | 0.6971 | 0.6986 | | 0.5438 | 24.35 | 5600 | 0.5770 | 0.6996 | 0.6997 | | 0.5451 | 25.22 | 5800 | 0.5758 | 0.7026 | 0.7033 | | 0.5398 | 26.09 | 6000 | 0.5825 | 0.6993 | 0.6997 | | 0.5445 | 26.96 | 6200 | 0.5784 | 0.7024 | 0.7033 | | 0.539 | 27.83 | 6400 | 0.5798 | 0.6992 | 0.7 | | 0.5415 | 28.7 | 6600 | 0.5787 | 0.7003 | 0.7 | | 0.5385 | 29.57 | 6800 | 0.5747 | 0.7048 | 0.7046 | | 0.5353 | 30.43 | 7000 | 0.5783 | 0.7036 | 0.7041 | | 0.5421 | 31.3 | 7200 | 0.5766 | 0.7032 | 0.7033 | | 0.5388 | 32.17 | 7400 | 0.5753 | 0.7044 | 0.7043 | | 0.5366 | 33.04 | 7600 | 0.5734 | 0.7035 | 0.7033 | | 0.5372 | 33.91 | 7800 | 0.5777 | 0.7014 | 0.7016 | | 0.5361 | 34.78 | 8000 | 0.5769 | 0.7032 | 0.7030 | | 0.5349 | 35.65 | 8200 | 0.5768 | 0.7032 | 0.7030 | | 0.5339 | 36.52 | 8400 | 0.5764 | 0.7048 | 0.7046 | | 0.5352 | 37.39 | 8600 | 0.5759 | 0.7034 | 0.7033 | | 0.5284 | 38.26 | 8800 | 0.5802 | 0.7026 | 0.7030 | | 0.5395 | 39.13 | 9000 | 
0.5747 | 0.7060 | 0.7063 | | 0.5328 | 40.0 | 9200 | 0.5767 | 0.7039 | 0.7038 | | 0.5306 | 40.87 | 9400 | 0.5771 | 0.7043 | 0.7041 | | 0.5328 | 41.74 | 9600 | 0.5774 | 0.7044 | 0.7043 | | 0.5359 | 42.61 | 9800 | 0.5761 | 0.7039 | 0.7038 | | 0.5272 | 43.48 | 10000 | 0.5771 | 0.7048 | 0.7046 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:11:47+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3K4me3-seqsight\_4096\_512\_46M-L1\_f ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.5650 * F1 Score: 0.7048 * Accuracy: 0.7049 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/AiMavenAi/Herd-1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Herd-1-GGUF/resolve/main/Herd-1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
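The usage note above defers to TheBloke's READMEs for GGUF handling. As a concrete but hedged illustration, one common route is `huggingface_hub` plus `llama-cpp-python`; the quant choice below (Q4_K_M, the "recommended" row of the table) and the context size are assumptions, and any other row of the table would work the same way.

```python
# Illustrative only: download one quant from the table above and run a prompt.
# Assumptions: llama-cpp-python is installed with a build that supports this
# architecture, and Q4_K_M is the quant you want.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Herd-1-GGUF",
    filename="Herd-1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence about herds.", max_tokens=64)
print(out["choices"][0]["text"])
```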
{"language": ["en"], "library_name": "transformers", "base_model": "AiMavenAi/Herd-1", "quantized_by": "mradermacher"}
mradermacher/Herd-1-GGUF
null
[ "transformers", "gguf", "en", "base_model:AiMavenAi/Herd-1", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:14:18+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-AiMavenAi/Herd-1 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
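The processed view above drops the quant table, so the individual file names are no longer visible here. A hedged way to recover them at runtime is to list the repository contents with `huggingface_hub`; the filter on the `.gguf` suffix is an assumption about how the files are named.

```python
# List the GGUF files actually published in the repo, since the quant table
# is stripped from this processed view of the card.
from huggingface_hub import list_repo_files

gguf_files = sorted(
    f for f in list_repo_files("mradermacher/Herd-1-GGUF") if f.endswith(".gguf")
)
for name in gguf_files:
    print(name)
```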
[]
[ "TAGS\n#transformers #gguf #en #base_model-AiMavenAi/Herd-1 #endpoints_compatible #region-us \n" ]
null
null
quantized_by: KnightCodin --- ## Exllama v2 Quantizations of winglian/Llama-3-8b-64k-PoSE Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization. <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b> Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions. pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - axolotl --- ## Llama 3 8B 64K https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/tree/main [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <img src="https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/resolve/main/output.png" /> This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0. We used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens. We have further set rope_theta to 2M after continued pre-training to potentially further extend the context past 64k. This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. We trained a rank stabilized LoRA of rank 256. [WandB](https://wandb.ai/oaaic/llama-3-64k/runs/tkcyjt37) ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. 
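The PoSE description above says the context was extended from 8k to 64k at rope_theta 500000.0, with rope_theta later raised to 2M to probe contexts past 64k. The sketch below shows how one could inspect (and, speculatively, override) that value on the upstream full-precision checkpoint via transformers; the override simply mirrors the card's 2M remark and is not a tested recipe, and the exl2 branches in this repo are instead loaded with ExLlamaV2 tooling, which is not shown here.

```python
# Inspect the rope_theta / context settings of the upstream PoSE checkpoint.
# The override below is speculative, echoing the card's "set rope_theta to 2M"
# statement; it is not a procedure confirmed by the authors.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "winglian/Llama-3-8b-64k-PoSE"
config = AutoConfig.from_pretrained(model_id)
print("rope_theta:", config.rope_theta)
print("max_position_embeddings:", config.max_position_embeddings)

# Optional, untested: raise rope_theta before loading to explore longer contexts.
config.rope_theta = 2_000_000.0
model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype=torch.bfloat16, device_map="auto"
)
```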
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python >>> import transformers >>> import torch >>> model_id = "meta-llama/Meta-Llama-3-8B" >>> pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) >>> pipeline("Hey how are you doing today?") ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. 
The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. 
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). 
#### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "license": "cc-by-nc-4.0"}
Knightcodin/Llama-3-8b-64k-PoSE-exl2
null
[ "en", "arxiv:2309.10400", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-26T21:15:22+00:00
[ "2309.10400" ]
[ "en" ]
TAGS #en #arxiv-2309.10400 #license-cc-by-nc-4.0 #region-us
quantized\_by: KnightCodin -------------------------- Exllama v2 Quantizations of winglian/Llama-3-8b-64k-PoSE -------------------------------------------------------- Using <a href="URL ExLlamaV2 v0.0.19 for quantization. **The "main" branch only contains the URL, download one of the other branches for the model (see below)** Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions. pipeline\_tag: text-generation tags: * facebook * meta * pytorch * llama * llama-3 * axolotl --- Llama 3 8B 64K URL ------------------ <img src="URL alt="Built with Axolotl" width="200" height="32"/> <img src="URL /> This model uses PoSE to extend Llama's context length from 8k to 64k @ rope\_theta: 500000.0. We used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens. We have further set rope\_theta to 2M after continued pre-training to potentially further extend the context past 64k. This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. We trained a rank stabilized LoRA of rank 256. WandB Model Details ------------- Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. Model developers Meta Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. Input Models input text only. Output Models generate text and code only. Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. Llama 3 family of models. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. Model Release Date April 18, 2024. Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. License A custom commercial license is available at: URL Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here. Intended Use ------------ Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English. Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. 
How to use ---------- This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original 'llama3' codebase. ### Use with transformers See the snippet below for usage with Transformers: ### Use with 'llama3' Please, follow the instructions in the repository. To download Original checkpoints, see the example command below leveraging 'huggingface-cli': For Hugging Face support, we recommend using transformers or TGI, but a similar command works. Hardware and Software --------------------- Training Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. Carbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. Training Data ------------- Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. Data Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. Benchmarks ---------- In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here. ### Base pretrained models ### Instruction tuned models ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started. 
#### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. Safety For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. Refusals In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL #### Critical risks CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### Cyber Security We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability. ### Child Safety Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. 
We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository. Finally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community. Ethical Considerations and Limitations -------------------------------------- The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. 
Please see the Responsible Use Guide available at URL instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {URL } Contributors ------------ Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit 
Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
[ "### Use with transformers\n\n\nSee the snippet below for usage with Transformers:", "### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.", "### Base pretrained models", "### Instruction tuned models", "### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. 
We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.", "#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.", "#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL", "#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).", "### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.", "### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. 
For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.", "### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos" ]
[ "TAGS\n#en #arxiv-2309.10400 #license-cc-by-nc-4.0 #region-us \n", "### Use with transformers\n\n\nSee the snippet below for usage with Transformers:", "### Use with 'llama3'\n\n\nPlease, follow the instructions in the repository.\n\n\nTo download Original checkpoints, see the example command below leveraging 'huggingface-cli':\n\n\nFor Hugging Face support, we recommend using transformers or TGI, but a similar command works.\n\n\nHardware and Software\n---------------------\n\n\nTraining Factors We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.\n\n\nCarbon Footprint Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.\n\n\n\nCO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.\n\n\nTraining Data\n-------------\n\n\nOverview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.\n\n\nData Freshness The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.\n\n\nBenchmarks\n----------\n\n\nIn this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.", "### Base pretrained models", "### Instruction tuned models", "### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. 
We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.", "#### Llama 3-Instruct\n\n\nAs outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.\n\n\nSafety\n\n\nFor our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.\n\n\nRefusals\n\n\nIn addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.\n\n\nWe built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.", "#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL", "#### Critical risks\n\n\nCBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)\n\n\nWe have conducted a two fold assessment of the safety of the model in this area:\n\n\n* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.\n* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).", "### Cyber Security\n\n\nWe have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of equivalent coding capability.", "### Child Safety\n\n\nChild Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. 
For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.", "### Community\n\n\nGenerative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our Github repository.\n\n\nFinally, we put in place a set of resources including an output reporting mechanism and bug bounty program to continuously improve the Llama technology with the help of the community.\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\n@article{llama3modelcard,\n\n\ntitle={Llama 3 Model Card},\n\n\nauthor={AI@Meta},\n\n\nyear={2024},\n\n\nurl = {URL\n\n\n}\n\n\nContributors\n------------\n\n\nAaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; 
Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos" ]
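The "Use with transformers" entry in the record above refers to a snippet that is not preserved in the extracted text. The sketch below shows what chat-style inference with transformers typically looks like for a Llama-3-Instruct-style checkpoint; the repository id is a placeholder (the record does not name one) and the generation settings are illustrative, not taken from the card.

```python
# Minimal sketch of chat-style inference with transformers for a Llama 3 instruct model.
# The repo id below is a placeholder; substitute the actual model id from this record.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder / assumed id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a model card is in one sentence."},
]
# Build the prompt with the model's chat template and generate a short reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```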
text-generation
mlx
# mlx-community/Meta-Llama-3-8B-Instruct This model was converted to MLX format from [`meta-llama/Meta-Llama-3-8B-Instruct`]() using mlx-lm version **0.12.0**. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "mlx"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "widget": [{"example_title": "Hello", "messages": [{"role": "user", "content": "Hey my name is Julien! How are you?"}]}, {"example_title": "Winter holidays", "messages": [{"role": "system", "content": "You are a helpful and honest assistant. 
Please, respond concisely and truthfully."}, {"role": "user", "content": "Can you recommend a good destination for Winter holidays?"}]}, {"example_title": "Programming assistant", "messages": [{"role": "system", "content": "You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully."}, {"role": "user", "content": "Write a function that computes the nth fibonacci number."}]}], "inference": {"parameters": {"max_new_tokens": 300, "stop": ["<|end_of_text|>", "<|eot_id|>"]}}}
mlx-community/Meta-Llama-3-8B-Instruct
null
[ "mlx", "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "license:other", "region:us" ]
null
2024-04-26T21:19:33+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #conversational #en #license-other #region-us
# mlx-community/Meta-Llama-3-8B-Instruct This model was converted to MLX format from ['meta-llama/Meta-Llama-3-8B-Instruct']() using mlx-lm version 0.12.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/Meta-Llama-3-8B-Instruct\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-8B-Instruct']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #conversational #en #license-other #region-us \n", "# mlx-community/Meta-Llama-3-8B-Instruct\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-8B-Instruct']() using mlx-lm version 0.12.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
transformers
# Uploaded model - **Developed by:** jjohnsondev - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
jjohnsondev/Mistral-7B-Summarizer-QLoRA
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:24:52+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: jjohnsondev - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: jjohnsondev\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: jjohnsondev\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
basakerdogan/Cyber-Jarvis-4Bit
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us", "has_space" ]
null
2024-04-26T21:24:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us #has_space
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us #has_space \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-ar This model is a fine-tuned version of [tner/xlm-roberta-base-panx-dataset-ar](https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ar) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1977 - F1: 0.8803 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2179 | 1.0 | 188 | 0.1977 | 0.8803 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "tner/xlm-roberta-base-panx-dataset-ar", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-ar", "results": []}]}
Awayes/xlm-roberta-base-finetuned-panx-ar
null
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:tner/xlm-roberta-base-panx-dataset-ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:24:59+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-tner/xlm-roberta-base-panx-dataset-ar #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-ar ================================== This model is a fine-tuned version of tner/xlm-roberta-base-panx-dataset-ar on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1977 * F1: 0.8803 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-tner/xlm-roberta-base-panx-dataset-ar #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2 This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2", "results": []}]}
ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:25:43+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2 This model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q5_K_M.gguf) | Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF/resolve/main/OpenBioLLM-Llama3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
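For the multi-part files in the table above (Q6_K and Q8_0), the parts only need to be concatenated byte-for-byte, in order, into a single `.gguf` file before loading. A minimal sketch is shown below; it is equivalent to `cat part1 part2 > out.gguf`, the file names are taken from the table, and the local paths are placeholders.

```python
# Sketch: join split GGUF parts into one file, in part order.
# Equivalent to `cat part1 part2 > out.gguf`; not an official tool of this repo.
import shutil

parts = [
    "OpenBioLLM-Llama3-70B.Q6_K.gguf.part1of2",
    "OpenBioLLM-Llama3-70B.Q6_K.gguf.part2of2",
]
output = "OpenBioLLM-Llama3-70B.Q6_K.gguf"

with open(output, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy to avoid loading ~58 GB into RAM
print(f"wrote {output}")
```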
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "aaditya/OpenBioLLM-Llama3-70B", "quantized_by": "mradermacher"}
mradermacher/OpenBioLLM-Llama3-70B-GGUF
null
[ "transformers", "gguf", "llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "base_model:aaditya/OpenBioLLM-Llama3-70B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:26:36+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #base_model-aaditya/OpenBioLLM-Llama3-70B #license-llama3 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #base_model-aaditya/OpenBioLLM-Llama3-70B #license-llama3 #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
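The card above leaves the getting-started snippet blank. Since the repository is tagged as a `llama` text-generation checkpoint, a generic, hedged starting point would look like the following; it assumes the weights load with the standard auto classes, which the card does not confirm.

```python
# Generic sketch for a transformers text-generation checkpoint; not taken from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MrezaPRZ/CodeLLama_SFT_FULL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires accelerate; drop it to load on CPU
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```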
{"library_name": "transformers", "tags": []}
MrezaPRZ/CodeLLama_SFT_FULL
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:26:40+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
grahamaco/Mixtral-8x7B-Instruct-v0.1-touch-rugby-rules-adapters
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:27:45+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NHS-BiomedNLP-BiomedBERT-hypop-512 This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3839 - Accuracy: 0.8269 - Precision: 0.8228 - Recall: 0.8237 - F1: 0.8232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.124 | 1.0 | 397 | 0.4029 | 0.8177 | 0.8146 | 0.8129 | 0.8137 | | 0.0594 | 2.0 | 794 | 0.4561 | 0.8246 | 0.8245 | 0.8161 | 0.8192 | | 0.1105 | 3.0 | 1191 | 0.5390 | 0.8120 | 0.8119 | 0.8028 | 0.8059 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cpu - Datasets 2.18.0 - Tokenizers 0.15.2
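If the checkpoint above is public, the fine-tuned classifier can be exercised through the standard `pipeline` API. The snippet below is a hedged sketch: the label meanings and the intended input domain are not documented in the card, so the example text is a placeholder.

```python
# Hedged sketch: run the fine-tuned classifier via the generic pipeline API.
# Label meanings are not documented in the card; the input text is a placeholder.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="NIHNCATS/NHS-BiomedNLP-BiomedBERT-hypop-512",
)

# Long abstracts may need truncation to the model's 512-token limit.
example = "Placeholder biomedical abstract text to classify."
print(clf(example))
```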
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract", "model-index": [{"name": "NHS-BiomedNLP-BiomedBERT-hypop-512", "results": []}]}
NIHNCATS/NHS-BiomedNLP-BiomedBERT-hypop-512
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:30:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract #license-mit #autotrain_compatible #endpoints_compatible #region-us
NHS-BiomedNLP-BiomedBERT-hypop-512 ================================== This model is a fine-tuned version of microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.3839 * Accuracy: 0.8269 * Precision: 0.8228 * Recall: 0.8237 * F1: 0.8232 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 6 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.2+cpu * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K4me3-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.6169 - F1 Score: 0.7002 - Accuracy: 0.7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.6239 | 0.87 | 200 | 0.5985 | 0.6788 | 0.6793 | | 0.5862 | 1.74 | 400 | 0.5935 | 0.6854 | 0.6861 | | 0.5743 | 2.61 | 600 | 0.5918 | 0.6874 | 0.6875 | | 0.5648 | 3.48 | 800 | 0.5885 | 0.6931 | 0.6929 | | 0.5541 | 4.35 | 1000 | 0.6055 | 0.6882 | 0.6880 | | 0.5479 | 5.22 | 1200 | 0.5793 | 0.6969 | 0.6967 | | 0.541 | 6.09 | 1400 | 0.5860 | 0.6984 | 0.6992 | | 0.5342 | 6.96 | 1600 | 0.5830 | 0.7035 | 0.7033 | | 0.5227 | 7.83 | 1800 | 0.5826 | 0.6962 | 0.6959 | | 0.5144 | 8.7 | 2000 | 0.5969 | 0.7025 | 0.7022 | | 0.5064 | 9.57 | 2200 | 0.5766 | 0.7030 | 0.7033 | | 0.5015 | 10.43 | 2400 | 0.6176 | 0.7093 | 0.7092 | | 0.4935 | 11.3 | 2600 | 0.5811 | 0.7026 | 0.7035 | | 0.4908 | 12.17 | 2800 | 0.6091 | 0.6883 | 0.6905 | | 0.4811 | 13.04 | 3000 | 0.5796 | 0.7064 | 0.7063 | | 0.4709 | 13.91 | 3200 | 0.5845 | 0.7144 | 0.7141 | | 0.4587 | 14.78 | 3400 | 0.6026 | 0.7110 | 0.7109 | | 0.4555 | 15.65 | 3600 | 0.6061 | 0.7163 | 0.7163 | | 0.4414 | 16.52 | 3800 | 0.6199 | 0.7123 | 0.7122 | | 0.4388 | 17.39 | 4000 | 0.6460 | 0.7095 | 0.7092 | | 0.4313 | 18.26 | 4200 | 0.6381 | 0.7134 | 0.7133 | | 0.4264 | 19.13 | 4400 | 0.6426 | 0.7141 | 0.7139 | | 0.4191 | 20.0 | 4600 | 0.6407 | 0.7067 | 0.7068 | | 0.4071 | 20.87 | 4800 | 0.6623 | 0.7118 | 0.7117 | | 0.4026 | 21.74 | 5000 | 0.6646 | 0.7055 | 0.7054 | | 0.3947 | 22.61 | 5200 | 0.6809 | 0.7034 | 0.7033 | | 0.3927 | 23.48 | 5400 | 0.6699 | 0.7071 | 0.7068 | | 0.3836 | 24.35 | 5600 | 0.6672 | 0.7075 | 0.7079 | | 0.3777 | 25.22 | 5800 | 0.7169 | 0.7033 | 0.7033 | | 0.3736 | 26.09 | 6000 | 0.7113 | 0.7071 | 0.7068 | | 0.3693 | 26.96 | 6200 | 0.7191 | 0.7098 | 0.7095 | | 0.3574 | 27.83 | 6400 | 0.7157 | 0.7106 | 0.7103 | | 0.358 | 28.7 | 6600 | 0.7305 | 0.6995 | 0.6995 | | 0.354 | 29.57 | 6800 | 0.7093 | 0.7080 | 0.7079 | | 0.3459 | 30.43 | 7000 | 0.7316 | 0.7030 | 0.7027 | | 0.3477 | 31.3 | 7200 | 0.7457 | 0.7046 | 0.7043 | | 0.3398 | 32.17 | 7400 | 0.7478 | 0.7072 | 0.7071 | | 0.3402 | 33.04 | 7600 | 0.7307 | 0.7052 | 0.7049 | | 0.3345 | 33.91 | 7800 | 0.7317 | 0.7090 | 0.7090 | | 0.3319 | 34.78 | 8000 | 0.7630 | 0.7046 | 0.7043 | | 0.3208 | 35.65 | 8200 | 0.7667 | 0.7060 | 0.7057 | | 0.3236 | 36.52 | 8400 | 0.7576 | 0.7063 | 0.7060 | | 0.3226 | 37.39 | 8600 | 0.7906 | 0.7081 | 0.7079 | | 0.3161 | 38.26 | 8800 | 0.7812 | 0.7079 | 0.7076 | | 0.3236 | 39.13 
| 9000 | 0.7644 | 0.7073 | 0.7071 | | 0.3129 | 40.0 | 9200 | 0.7809 | 0.7065 | 0.7063 | | 0.3078 | 40.87 | 9400 | 0.7810 | 0.7092 | 0.7090 | | 0.3135 | 41.74 | 9600 | 0.7768 | 0.7106 | 0.7103 | | 0.3145 | 42.61 | 9800 | 0.7797 | 0.7087 | 0.7084 | | 0.307 | 43.48 | 10000 | 0.7809 | 0.7087 | 0.7084 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
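The checkpoint above is a PEFT adapter rather than a full model, so it is normally loaded on top of its base model. The sketch below shows the usual pattern; the auto class, the `trust_remote_code` flag for the seqsight base model, and the number of labels are assumptions not stated in the card.

```python
# Hedged sketch of attaching the PEFT adapter to its base model.
# The auto class, trust_remote_code, and num_labels are assumptions.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_46M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # placeholder DNA sequence
logits = model(**inputs).logits
print(logits)
```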
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:31:53+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3K4me3-seqsight\_4096\_512\_46M-L32\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.6169 * F1 Score: 0.7002 * Accuracy: 0.7 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# xhluca/Llama-3-8B-Web-Q4_K_M-GGUF This model was converted to GGUF format from [`McGill-NLP/Llama-3-8B-Web`](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo xhluca/Llama-3-8B-Web-Q4_K_M-GGUF --model llama-3-8b-web.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo xhluca/Llama-3-8B-Web-Q4_K_M-GGUF --model llama-3-8b-web.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-web.Q4_K_M.gguf -n 128 ```
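Besides the llama.cpp CLI and server shown above, the GGUF file can also be loaded from Python. The sketch below uses the third-party `llama-cpp-python` bindings, which are not mentioned in the card; the local file path and generation settings are placeholders.

```python
# Hedged sketch using the llama-cpp-python bindings (not mentioned in the card).
# Assumes llama-3-8b-web.Q4_K_M.gguf has already been downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="./llama-3-8b-web.Q4_K_M.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```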
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["agents", "agent", "llm", "llama", "llama-cpp", "gguf-my-repo"], "datasets": ["McGill-NLP/WebLINX"]}
xhluca/Llama-3-8B-Web-Q4_K_M-GGUF
null
[ "transformers", "gguf", "agents", "agent", "llm", "llama", "llama-cpp", "gguf-my-repo", "en", "dataset:McGill-NLP/WebLINX", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:33:44+00:00
[]
[ "en" ]
TAGS #transformers #gguf #agents #agent #llm #llama #llama-cpp #gguf-my-repo #en #dataset-McGill-NLP/WebLINX #license-llama3 #endpoints_compatible #region-us
# xhluca/Llama-3-8B-Web-Q4_K_M-GGUF This model was converted to GGUF format from 'McGill-NLP/Llama-3-8B-Web' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# xhluca/Llama-3-8B-Web-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'McGill-NLP/Llama-3-8B-Web' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #agents #agent #llm #llama #llama-cpp #gguf-my-repo #en #dataset-McGill-NLP/WebLINX #license-llama3 #endpoints_compatible #region-us \n", "# xhluca/Llama-3-8B-Web-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'McGill-NLP/Llama-3-8B-Web' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# c4ai-command-r-plus - llamafile This repository contains executable weights (which we call [llamafiles](https://github.com/Mozilla-Ocho/llamafile)) that run on Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64. - Model creator: [CohereForAI](https://huggingface.co/CohereForAI) - Original model: [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) ## Quickstart You can run the following commands, which download, concatenate, and execute the model. ``` wget https://huggingface.co/jartine/c4ai-command-r-plus-llamafile/resolve/main/c4ai-command-r-plus.Q2_K.llamafile chmod +x c4ai-command-r-plus.Q2_K.llamafile ./c4ai-command-r-plus.Q2_K.llamafile --help # view manual ./c4ai-command-r-plus.Q2_K.llamafile # launch web gui + oai api ./c4ai-command-r-plus.Q2_K.llamafile -p ... # cli interface (scriptable) ``` Alternatively, you may download an official `llamafile` executable from Mozilla Ocho on GitHub, in which case you can use these llamafiles as simple weights data files. ``` llamafile -m ./c4ai-command-r-plus.Q2_K.llamafile ... ``` For further information, please see the [llamafile README](https://github.com/mozilla-ocho/llamafile/). Having **trouble?** See the ["Gotchas" section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas) of the README. ## About Upload Limits Files which exceed the Hugging Face 50GB upload limit have a .catX extension. You need to use the `cat` command locally to turn them back into a single file, using the same order. ## Prompting Prompt template: ``` <BOS_TOKEN> <|START_OF_TURN_TOKEN|> <|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|> <|START_OF_TURN_TOKEN|> <|CHATBOT_TOKEN|> ``` ## About llamafile llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries that run on the stock installs of six OSes for both ARM64 and AMD64. In addition to being executables, llamafiles are also zip archives. Each llamafile contains a GGUF file, which you can extract using the `unzip` command. If you want to change or add files to your llamafiles, then the `zipalign` command (distributed on the llamafile github) should be used instead of the traditional `zip` command. ## License The Command-R-Plus license requires: - You can't use these weights for commercial purposes - You have to give Cohere credit if you share or fine-tune it - You can't use it for purposes they consider unacceptable, such as spam, misinformation, etc. The license says they can change the definition of acceptable use at will. - The CC-BY-NC 4.0 stipulates no downstream restrictions, so you can't tack on your own list of unacceptable uses too if you create and distribute a fine-tuned version. ## About Quantization Formats (General Advice) Your choice of quantization format depends on three things: 1. Will it fit in RAM or VRAM? 2. Is your use case reading (e.g. summarization) or writing (e.g. chatbot)? 3. llamafiles bigger than 4.30 GB are hard to run on Windows (see [gotchas](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)) Good quants for writing (prediction speed) are Q5\_K\_M and Q4\_0. Text generation is bounded by memory speed, so smaller quants help, but they cause the LLM to hallucinate more. However, that doesn't mean they can't think correctly. A highly degraded quant like `Q2_K` may not make a great encyclopedia, but it's still capable of logical reasoning and the emergent capabilities LLMs exhibit.
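Once the Quickstart command above is running in server mode ("web gui + oai api"), it can be queried over HTTP. The sketch below assumes the server's default local port (8080) and an OpenAI-compatible chat-completions route, which is what the "oai api" in the Quickstart refers to; adjust the URL if your setup differs.

```python
# Hedged sketch: query a running llamafile server via its OpenAI-compatible API.
# The port and route are assumptions based on llamafile's documented defaults.
import json
import urllib.request

payload = {
    "model": "c4ai-command-r-plus",  # placeholder; the local server serves one model
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "temperature": 0.3,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```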
Good quants for reading (evaluation speed) are BF16, F16, Q8\_0, and Q4\_0 (ordered from fastest to slowest). Prompt evaluation is bounded by flop count, which means perf can be improved through software engineering alone, e.g. BLAS algorithms, in which case quantization starts hurting more than it helps, since it competes for CPU resources and makes it harder for the compiler to parallelize instructions. Ideally you want to use the simplest, smallest floating point format that's natively implemented by your hardware. In most cases, that's BF16 or FP16. However, llamafile is still able to offer respectable tinyBLAS speedups for llama.cpp's simplest quants: Q8\_0 and Q4\_0. --- # Model Card for C4AI Command R+ 🚨 **This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**. ## Model Summary C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01). Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-plus - Model Size: 104 billion parameters - Context length: 128K **Try C4AI Command R+** You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus). **Usage** Please install `transformers` from the source repository that includes the necessary changes for this model.
```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 8-bit precision** ```python # pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_8bit=True) model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 4-bit precision** This model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. **Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. **Context length**: Command R+ supports a context length of 128K. ## Evaluations Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-art open weights models currently available on Hugging Face. 
We note that these results are only useful to compare when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publically available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way. | Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k | |:--------------------------------|----------:|------------------:|-------------:|-------:|--------------:|-------------:|--------:| | **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 | | [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 | | [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 | | [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 | | [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 | | [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 | | [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 | | [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 | | [LLama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 | We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, tooling performance or the evaluation of open ended generations which we believe Command R+ to be state-of-art at. For evaluations of RAG, multilingual and tooling read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/). ### Tool use & multihop capabilities: Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). 
The code snippet below shows a minimal working example on how to render a prompt. <details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. 
The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." 
}, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. 
Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions; a minimal decoding sketch illustrating this recommendation is shown at the end of this card. ### Model Card Contact For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
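As a companion to the Code Capabilities note above, the sketch below shows one way to apply the low-temperature / greedy-decoding recommendation with the `transformers` generation API. This is a minimal sketch rather than part of the original card: the example prompt is illustrative, and standard chat-template and generation calls are assumed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative code-generation instruction (not from the card).
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# do_sample=False gives greedy decoding; alternatively keep sampling and lower the temperature.
gen_tokens = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(gen_tokens[0], skip_special_tokens=True))
```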
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "other", "library_name": "transformers", "tags": ["llamafile"], "base_model": "CohereForAI/c4ai-command-r-plus", "model_creator": "CohereForAI", "quantized_by": "jartine", "license_link": "LICENSE", "pipeline_tag": "text-generation", "prompt_template": "<BOS_TOKEN>\n<|START_OF_TURN_TOKEN|>\n<|USER_TOKEN|>{{prompt}}<|END_OF_TURN_TOKEN|>\n<|START_OF_TURN_TOKEN|>\n<|CHATBOT_TOKEN|>\n"}
jartine/c4ai-command-r-plus-llamafile
null
[ "transformers", "llamafile", "text-generation", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "base_model:CohereForAI/c4ai-command-r-plus", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:34:19+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #llamafile #text-generation #en #fr #de #es #it #pt #ja #ko #zh #ar #base_model-CohereForAI/c4ai-command-r-plus #license-other #endpoints_compatible #region-us
c4ai-command-r-plus - llamafile =============================== This repository contains executable weights (which we call llamafiles) that run on Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64. * Model creator: CohereForAI * Original model: CohereForAI/c4ai-command-r-plus Quickstart ---------- You can run the following command, which downloads, concatenates, and executes the model. Alternatively, you may download an official 'llamafile' executable from Mozilla Ocho on GitHub, in which case you can use these llamafiles as a simple weights data file. For further information, please see the llamafile README. Having trouble? See the "Gotchas" section of the README. About Upload Limits ------------------- Files which exceed the Hugging Face 50GB upload limit have a .catX extension. You need to use the 'cat' command locally to turn them back into a single file, using the same order (a Python sketch of this step is shown at the end of this README). Prompting --------- Prompt template: About llamafile --------------- llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable URL binaries that run on the stock installs of six OSes for both ARM64 and AMD64. In addition to being executables, llamafiles are also zip archives. Each llamafile contains a GGUF file, which you can extract using the 'unzip' command. If you want to change or add files to your llamafiles, then the 'zipalign' command (distributed on the llamafile GitHub) should be used instead of the traditional 'zip' command. License ------- The Command-R-Plus license requires: * You can't use these weights for commercial purposes * You have to give Cohere credit if you share or fine-tune it * You can't use it for purposes they consider unacceptable, such as spam, misinformation, etc. The license says they can change the definition of acceptable use at will. * The CC-BY-NC 4.0 stipulates no downstream restrictions, so you can't tack on your own list of unacceptable uses too if you create and distribute a fine-tuned version. About Quantization Formats (General Advice) ------------------------------------------- Your choice of quantization format depends on three things: 1. Will it fit in RAM or VRAM? 2. Is your use case reading (e.g. summarization) or writing (e.g. chatbot)? 3. llamafiles bigger than 4.30 GB are hard to run on Windows (see gotchas) Good quants for writing (prediction speed) are Q5\_K\_M and Q4\_0. Text generation is bounded by memory speed, so smaller quants help, but they cause the LLM to hallucinate more. However, that doesn't mean they can't think correctly. A highly degraded quant like 'Q2\_K' may not make a great encyclopedia, but it's still capable of logical reasoning and the emergent capabilities LLMs exhibit. Good quants for reading (evaluation speed) are BF16, F16, Q8\_0, and Q4\_0 (ordered from fastest to slowest). Prompt evaluation is bounded by flop count, which means perf can be improved through software engineering alone, e.g. BLAS algorithms, in which case quantization starts hurting more than it helps, since it competes for CPU resources and makes it harder for the compiler to parallelize instructions. Ideally, you want to use the simplest, smallest floating-point format that's natively implemented by your hardware. In most cases, that's BF16 or FP16. However, llamafile is still able to offer respectable tinyBLAS speedups for URL's simplest quants: Q8\_0 and Q4\_0. 
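A minimal sketch of the concatenation step described under "About Upload Limits", for readers who prefer Python over the `cat` command. The file names and the `.catN` numbering scheme below are assumptions for illustration; follow the repository's actual file listing and join the parts in the same order.

```python
import shutil
from pathlib import Path

# Hypothetical file names: parts like <name>.llamafile.cat0, .cat1, ... must be joined in order.
parts = sorted(
    Path(".").glob("c4ai-command-r-plus.Q4_0.llamafile.cat*"),
    key=lambda p: int(p.suffix.removeprefix(".cat")),  # .cat0, .cat1, ... -> 0, 1, ...
)
with open("c4ai-command-r-plus.Q4_0.llamafile", "wb") as out:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, out)  # stream copy; the parts can exceed available RAM
```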
--- Model Card for C4AI Command R+ ============================== This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes here. Model Summary ------------- C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is C4AI Command R. Developed by: Cohere and Cohere For AI * Point of Contact: Cohere For AI: URL * License: CC-BY-NC, requires also adhering to C4AI's Acceptable Use Policy * Model: c4ai-command-r-plus * Model Size: 104 billion parameters * Context length: 128K Try C4AI Command R+ You can try out C4AI Command R+ before downloading the weights in our hosted Hugging Face Space. Usage Please install 'transformers' from the source repository that includes the necessary changes for this model. Quantized model through bitsandbytes, 8-bit precision Quantized model through bitsandbytes, 4-bit precision This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes here. Model Details ------------- Input: Models input text only. Output: Models generate text only. Model Architecture: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. Languages covered: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. Context length: Command R+ supports a context length of 128K. Evaluations ----------- Command R+ has been submitted to the Open LLM leaderboard. We include the results below, along with a direct comparison to the strongest state-of-the-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a standardized way using publicly available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way. We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, or tooling performance, or the evaluation of open ended generations, which we believe Command R+ to be state-of-the-art at. For evaluations of RAG, multilingual and tooling, read more here. 
For evaluation of open ended generation, Command R+ is currently being evaluated on the chatbot arena. ### Tool use & multihop capabilities: Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once. The model has been trained to recognise a special 'directly\_answer' tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the 'directly\_answer' tool, but it can be removed or renamed if required. Comprehensive documentation for working with command R+'s tool use prompt template can be found here. The code snippet below shows a minimal working example on how to render a prompt. **Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]** **Example Rendered Tool Use Prompt [CLICK TO EXPAND]** python def internet\_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet ``` Args: query (str): Query to search the internet with """ pass ``` python def directly\_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass json [ { "tool\_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]' **Example Rendered Tool Use Completion [CLICK TO EXPAND]** json [ { "tool\_name": "internet\_search", "parameters": { "query": "biggest penguin in the world" } } ] ' ### Grounded Generation and RAG Capabilities: Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. 
Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as 'accurate' grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A 'fast' citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found here. The code snippet below shows a minimal working example on how to render a prompt. **Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]** ' **Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]** ' **Example Rendered Grounded Generation Completion [CLICK TO EXPAND]** ' ### Code Capabilities: Command R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact info@URL. ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a CC-BY-NC License with an acceptable use addendum, and also requires adhering to C4AI's Acceptable Use Policy. ### Try Chat: You can try Command R+ chat in the playground here. You can also use it in our dedicated Hugging Face Space here.
[ "### Tool use & multihop capabilities:\n\n\nCommand R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.\n\n\nCommand R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.\n\n\nThe model has been trained to recognise a special 'directly\\_answer' tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.\nWe recommend including the 'directly\\_answer' tool, but it can be removed or renamed if required.\n\n\nComprehensive documentation for working with command R+'s tool use prompt template can be found here.\n\n\nThe code snippet below shows a minimal working example on how to render a prompt.\n\n\n\n**Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]** \n\n\n**Example Rendered Tool Use Prompt [CLICK TO EXPAND]**\npython\ndef internet\\_search(query: str) -> List[Dict]:\n\"\"\"Returns a list of relevant document snippets for a textual query retrieved from the internet\n\n\n\n```\nArgs:\n query (str): Query to search the internet with\n\"\"\"\npass\n\n```\n\npython\ndef directly\\_answer() -> List[Dict]:\n\"\"\"Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history\n\"\"\"\npass\njson\n[\n{\n\"tool\\_name\": title of the tool in the specification,\n\"parameters\": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters\n}\n]'\n\n\n\n\n**Example Rendered Tool Use Completion [CLICK TO EXPAND]**\njson\n[\n{\n\"tool\\_name\": \"internet\\_search\",\n\"parameters\": {\n\"query\": \"biggest penguin in the world\"\n}\n}\n]\n'", "### Grounded Generation and RAG Capabilities:\n\n\nCommand R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.\n\n\nCommand R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. 
The keys should be short descriptive strings, the values can be text or semi-structured.\n\n\nBy default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as 'accurate' grounded generation.\n\n\nThe model is trained with a number of other answering modes, which can be selected by prompt changes. A 'fast' citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.\n\n\nComprehensive documentation for working with Command R+'s grounded generation prompt template can be found here.\n\n\nThe code snippet below shows a minimal working example on how to render a prompt.\n\n\n\n **Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]** \n'\n\n\n\n\n**Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]**\n'\n\n\n\n\n**Example Rendered Grounded Generation Completion [CLICK TO EXPAND]**\n'", "### Code Capabilities:\n\n\nCommand R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.", "### Model Card Contact\n\n\nFor errors or additional questions about details in this model card, contact info@URL.", "### Terms of Use:\n\n\nWe hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a CC-BY-NC License with an acceptable use addendum, and also requires adhering to C4AI's Acceptable Use Policy.", "### Try Chat:\n\n\nYou can try Command R+ chat in the playground here. You can also use it in our dedicated Hugging Face Space here." ]
[ "TAGS\n#transformers #llamafile #text-generation #en #fr #de #es #it #pt #ja #ko #zh #ar #base_model-CohereForAI/c4ai-command-r-plus #license-other #endpoints_compatible #region-us \n", "### Tool use & multihop capabilities:\n\n\nCommand R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.\n\n\nCommand R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.\n\n\nThe model has been trained to recognise a special 'directly\\_answer' tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.\nWe recommend including the 'directly\\_answer' tool, but it can be removed or renamed if required.\n\n\nComprehensive documentation for working with command R+'s tool use prompt template can be found here.\n\n\nThe code snippet below shows a minimal working example on how to render a prompt.\n\n\n\n**Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]** \n\n\n**Example Rendered Tool Use Prompt [CLICK TO EXPAND]**\npython\ndef internet\\_search(query: str) -> List[Dict]:\n\"\"\"Returns a list of relevant document snippets for a textual query retrieved from the internet\n\n\n\n```\nArgs:\n query (str): Query to search the internet with\n\"\"\"\npass\n\n```\n\npython\ndef directly\\_answer() -> List[Dict]:\n\"\"\"Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history\n\"\"\"\npass\njson\n[\n{\n\"tool\\_name\": title of the tool in the specification,\n\"parameters\": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters\n}\n]'\n\n\n\n\n**Example Rendered Tool Use Completion [CLICK TO EXPAND]**\njson\n[\n{\n\"tool\\_name\": \"internet\\_search\",\n\"parameters\": {\n\"query\": \"biggest penguin in the world\"\n}\n}\n]\n'", "### Grounded Generation and RAG Capabilities:\n\n\nCommand R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.\n\n\nCommand R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. 
The keys should be short descriptive strings, the values can be text or semi-structured.\n\n\nBy default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. See below for an example. This is referred to as 'accurate' grounded generation.\n\n\nThe model is trained with a number of other answering modes, which can be selected by prompt changes. A 'fast' citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.\n\n\nComprehensive documentation for working with Command R+'s grounded generation prompt template can be found here.\n\n\nThe code snippet below shows a minimal working example on how to render a prompt.\n\n\n\n **Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]** \n'\n\n\n\n\n**Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]**\n'\n\n\n\n\n**Example Rendered Grounded Generation Completion [CLICK TO EXPAND]**\n'", "### Code Capabilities:\n\n\nCommand R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.", "### Model Card Contact\n\n\nFor errors or additional questions about details in this model card, contact info@URL.", "### Terms of Use:\n\n\nWe hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a CC-BY-NC License with an acceptable use addendum, and also requires adhering to C4AI's Acceptable Use Policy.", "### Try Chat:\n\n\nYou can try Command R+ chat in the playground here. You can also use it in our dedicated Hugging Face Space here." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2585 - F1 Score: 0.9143 - Accuracy: 0.9144 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.309 | 2.17 | 200 | 0.2690 | 0.8966 | 0.8966 | | 0.2557 | 4.35 | 400 | 0.2645 | 0.9009 | 0.9008 | | 0.2401 | 6.52 | 600 | 0.2567 | 0.9006 | 0.9008 | | 0.2308 | 8.7 | 800 | 0.2602 | 0.9017 | 0.9014 | | 0.2178 | 10.87 | 1000 | 0.2584 | 0.9025 | 0.9028 | | 0.2115 | 13.04 | 1200 | 0.2571 | 0.9068 | 0.9069 | | 0.2007 | 15.22 | 1400 | 0.2609 | 0.9057 | 0.9055 | | 0.194 | 17.39 | 1600 | 0.2666 | 0.9071 | 0.9069 | | 0.1873 | 19.57 | 1800 | 0.2715 | 0.9082 | 0.9083 | | 0.1768 | 21.74 | 2000 | 0.2787 | 0.9036 | 0.9035 | | 0.1685 | 23.91 | 2200 | 0.2918 | 0.9007 | 0.9008 | | 0.1646 | 26.09 | 2400 | 0.2784 | 0.9068 | 0.9069 | | 0.1569 | 28.26 | 2600 | 0.2988 | 0.9047 | 0.9049 | | 0.1472 | 30.43 | 2800 | 0.2988 | 0.8915 | 0.8912 | | 0.144 | 32.61 | 3000 | 0.3173 | 0.9027 | 0.9028 | | 0.1345 | 34.78 | 3200 | 0.3016 | 0.8959 | 0.8960 | | 0.1315 | 36.96 | 3400 | 0.3170 | 0.8967 | 0.8966 | | 0.1257 | 39.13 | 3600 | 0.3426 | 0.8923 | 0.8925 | | 0.1193 | 41.3 | 3800 | 0.3451 | 0.8930 | 0.8932 | | 0.1119 | 43.48 | 4000 | 0.3724 | 0.8905 | 0.8905 | | 0.1104 | 45.65 | 4200 | 0.3722 | 0.8902 | 0.8905 | | 0.1027 | 47.83 | 4400 | 0.3907 | 0.8893 | 0.8891 | | 0.103 | 50.0 | 4600 | 0.3820 | 0.8987 | 0.8987 | | 0.0957 | 52.17 | 4800 | 0.4251 | 0.8914 | 0.8912 | | 0.0948 | 54.35 | 5000 | 0.4199 | 0.8921 | 0.8919 | | 0.0901 | 56.52 | 5200 | 0.4169 | 0.8915 | 0.8912 | | 0.0871 | 58.7 | 5400 | 0.4306 | 0.8877 | 0.8877 | | 0.082 | 60.87 | 5600 | 0.4256 | 0.8883 | 0.8884 | | 0.0821 | 63.04 | 5800 | 0.4689 | 0.8886 | 0.8884 | | 0.0747 | 65.22 | 6000 | 0.4801 | 0.8958 | 0.8960 | | 0.0778 | 67.39 | 6200 | 0.4491 | 0.8927 | 0.8925 | | 0.0709 | 69.57 | 6400 | 0.4827 | 0.8866 | 0.8864 | | 0.073 | 71.74 | 6600 | 0.4888 | 0.8871 | 0.8871 | | 0.0674 | 73.91 | 6800 | 0.5019 | 0.8892 | 0.8891 | | 0.0655 | 76.09 | 7000 | 0.5133 | 0.8907 | 0.8905 | | 0.0675 | 78.26 | 7200 | 0.4999 | 0.8883 | 0.8884 | | 0.0646 | 80.43 | 7400 | 0.5117 | 0.8893 | 0.8891 | | 0.0635 | 82.61 | 7600 | 0.5107 | 0.8898 | 0.8898 | | 0.0592 | 84.78 | 7800 | 0.5339 | 0.8906 | 0.8905 | | 0.0566 | 86.96 | 8000 | 0.5493 | 0.8879 | 0.8877 | | 0.0602 | 89.13 | 8200 | 0.5342 | 0.8831 | 0.8830 | | 0.0592 | 91.3 | 8400 | 0.5491 | 0.8912 | 0.8912 | | 0.0539 | 93.48 | 8600 | 0.5585 | 0.8884 | 0.8884 | | 0.0559 | 95.65 | 8800 | 0.5411 | 0.8919 | 0.8919 | | 0.0534 | 97.83 | 9000 | 
0.5574 | 0.8906 | 0.8905 | | 0.0547 | 100.0 | 9200 | 0.5596 | 0.8865 | 0.8864 | | 0.0502 | 102.17 | 9400 | 0.5609 | 0.8885 | 0.8884 | | 0.0563 | 104.35 | 9600 | 0.5453 | 0.8871 | 0.8871 | | 0.0527 | 106.52 | 9800 | 0.5504 | 0.8884 | 0.8884 | | 0.052 | 108.7 | 10000 | 0.5529 | 0.8871 | 0.8871 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
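As a rough illustration of how the hyperparameters listed above might map onto `transformers` training code, here is a minimal `TrainingArguments` sketch. It is an assumption-based reconstruction, not the exact training script: the output directory is illustrative, the reported batch sizes are assumed to be per device, and the PEFT/model/dataset wiring is omitted.

```python
from transformers import TrainingArguments

# Mirrors the listed hyperparameters; Adam betas and epsilon are spelled out for clarity.
training_args = TrainingArguments(
    output_dir="GUE_EMP_H4-seqsight_4096_512_46M-L8_f",  # illustrative output directory
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```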
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:34:30+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H4-seqsight\_4096\_512\_46M-L8\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset. It achieves the following results on the evaluation set: * Loss: 0.2585 * F1 Score: 0.9143 * Accuracy: 0.9144 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2522 - F1 Score: 0.9096 - Accuracy: 0.9097 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3211 | 2.17 | 200 | 0.2793 | 0.8947 | 0.8946 | | 0.2662 | 4.35 | 400 | 0.2867 | 0.8956 | 0.8953 | | 0.2559 | 6.52 | 600 | 0.2689 | 0.8934 | 0.8932 | | 0.2549 | 8.7 | 800 | 0.2676 | 0.8982 | 0.8980 | | 0.244 | 10.87 | 1000 | 0.2582 | 0.9012 | 0.9014 | | 0.2409 | 13.04 | 1200 | 0.2555 | 0.9007 | 0.9008 | | 0.2348 | 15.22 | 1400 | 0.2508 | 0.9055 | 0.9055 | | 0.2298 | 17.39 | 1600 | 0.2531 | 0.9077 | 0.9076 | | 0.2269 | 19.57 | 1800 | 0.2567 | 0.8997 | 0.9001 | | 0.222 | 21.74 | 2000 | 0.2597 | 0.9022 | 0.9021 | | 0.2159 | 23.91 | 2200 | 0.2554 | 0.9055 | 0.9055 | | 0.2145 | 26.09 | 2400 | 0.2550 | 0.9077 | 0.9076 | | 0.2127 | 28.26 | 2600 | 0.2576 | 0.9047 | 0.9049 | | 0.2094 | 30.43 | 2800 | 0.2528 | 0.9069 | 0.9069 | | 0.2051 | 32.61 | 3000 | 0.2605 | 0.9046 | 0.9049 | | 0.2007 | 34.78 | 3200 | 0.2592 | 0.9067 | 0.9069 | | 0.2018 | 36.96 | 3400 | 0.2576 | 0.9074 | 0.9076 | | 0.198 | 39.13 | 3600 | 0.2567 | 0.9060 | 0.9062 | | 0.1945 | 41.3 | 3800 | 0.2638 | 0.9031 | 0.9035 | | 0.1894 | 43.48 | 4000 | 0.2697 | 0.9032 | 0.9035 | | 0.1971 | 45.65 | 4200 | 0.2644 | 0.9066 | 0.9069 | | 0.1878 | 47.83 | 4400 | 0.2695 | 0.9060 | 0.9062 | | 0.1864 | 50.0 | 4600 | 0.2698 | 0.9025 | 0.9028 | | 0.1834 | 52.17 | 4800 | 0.2733 | 0.9026 | 0.9028 | | 0.1849 | 54.35 | 5000 | 0.2687 | 0.9068 | 0.9069 | | 0.1794 | 56.52 | 5200 | 0.2728 | 0.9049 | 0.9049 | | 0.1778 | 58.7 | 5400 | 0.2762 | 0.9039 | 0.9042 | | 0.174 | 60.87 | 5600 | 0.2727 | 0.9034 | 0.9035 | | 0.1764 | 63.04 | 5800 | 0.2764 | 0.9028 | 0.9028 | | 0.1712 | 65.22 | 6000 | 0.2843 | 0.9005 | 0.9008 | | 0.1732 | 67.39 | 6200 | 0.2781 | 0.9021 | 0.9021 | | 0.1687 | 69.57 | 6400 | 0.2778 | 0.9041 | 0.9042 | | 0.1709 | 71.74 | 6600 | 0.2827 | 0.9048 | 0.9049 | | 0.1661 | 73.91 | 6800 | 0.2840 | 0.9013 | 0.9014 | | 0.1641 | 76.09 | 7000 | 0.2825 | 0.9028 | 0.9028 | | 0.1663 | 78.26 | 7200 | 0.2867 | 0.8986 | 0.8987 | | 0.162 | 80.43 | 7400 | 0.2853 | 0.9013 | 0.9014 | | 0.1624 | 82.61 | 7600 | 0.2917 | 0.8957 | 0.8960 | | 0.1628 | 84.78 | 7800 | 0.2895 | 0.8986 | 0.8987 | | 0.161 | 86.96 | 8000 | 0.2899 | 0.8965 | 0.8966 | | 0.1611 | 89.13 | 8200 | 0.2888 | 0.8972 | 0.8973 | | 0.1597 | 91.3 | 8400 | 0.2939 | 0.8965 | 0.8966 | | 0.1551 | 93.48 | 8600 | 0.3008 | 0.8943 | 0.8946 | | 0.1581 | 95.65 | 8800 | 0.2983 | 0.8937 | 0.8939 | | 0.156 | 97.83 | 9000 | 
0.2947 | 0.8965 | 0.8966 | | 0.1558 | 100.0 | 9200 | 0.2942 | 0.8965 | 0.8966 | | 0.1559 | 102.17 | 9400 | 0.2962 | 0.8958 | 0.8960 | | 0.1571 | 104.35 | 9600 | 0.2950 | 0.8958 | 0.8960 | | 0.1553 | 106.52 | 9800 | 0.2972 | 0.8972 | 0.8973 | | 0.1522 | 108.7 | 10000 | 0.2964 | 0.8965 | 0.8966 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:35:04+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H4-seqsight\_4096\_512\_46M-L1\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset. It achieves the following results on the evaluation set: * Loss: 0.2522 * F1 Score: 0.9096 * Accuracy: 0.9097 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
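A hypothetical quick-start sketch for this checkpoint: the repository tags list `diffusers:StableDiffusionXLPipeline`, so it is assumed to load through the standard SDXL pipeline API; the prompt and generation settings are illustrative only.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumes the repo loads as a standard SDXL pipeline, per its tags.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "rubbrband/sdxl10ArienmixxlAsian_v45Pruned",
    torch_dtype=torch.float16,
).to("cuda")

# Illustrative prompt and settings.
image = pipe(prompt="a portrait photo, soft natural light", num_inference_steps=30).images[0]
image.save("sample.png")
```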
{"library_name": "diffusers"}
rubbrband/sdxl10ArienmixxlAsian_v45Pruned
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-26T21:35:56+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-turkish-300m-8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2539 - Wer: 0.1949 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:-----:|:---------------:|:------:| | 3.4377 | 0.1724 | 500 | 0.7853 | 0.6509 | | 0.7415 | 0.3447 | 1000 | 0.4365 | 0.4855 | | 0.472 | 0.5171 | 1500 | 0.3851 | 0.4410 | | 0.3678 | 0.6894 | 2000 | 0.3468 | 0.4292 | | 0.3512 | 0.8618 | 2500 | 0.3287 | 0.4139 | | 0.3345 | 1.0341 | 3000 | 0.3030 | 0.3810 | | 0.2976 | 1.2065 | 3500 | 0.3085 | 0.3702 | | 0.2841 | 1.3788 | 4000 | 0.3024 | 0.3964 | | 0.2674 | 1.5512 | 4500 | 0.2864 | 0.3471 | | 0.2693 | 1.7235 | 5000 | 0.2664 | 0.3411 | | 0.2564 | 1.8959 | 5500 | 0.2700 | 0.3399 | | 0.2407 | 2.0683 | 6000 | 0.2649 | 0.3284 | | 0.2225 | 2.2406 | 6500 | 0.2619 | 0.3243 | | 0.2209 | 2.4130 | 7000 | 0.2634 | 0.3154 | | 0.2221 | 2.5853 | 7500 | 0.2700 | 0.3250 | | 0.2104 | 2.7577 | 8000 | 0.2576 | 0.3115 | | 0.2095 | 2.9300 | 8500 | 0.2522 | 0.3123 | | 0.2031 | 3.1024 | 9000 | 0.2453 | 0.2954 | | 0.1849 | 3.2747 | 9500 | 0.2483 | 0.2949 | | 0.1911 | 3.4471 | 10000 | 0.2454 | 0.2984 | | 0.1784 | 3.6194 | 10500 | 0.2619 | 0.2956 | | 0.1891 | 3.7918 | 11000 | 0.2520 | 0.2870 | | 0.1822 | 3.9642 | 11500 | 0.2456 | 0.2945 | | 0.1633 | 4.1365 | 12000 | 0.2473 | 0.2905 | | 0.1594 | 4.3089 | 12500 | 0.2413 | 0.2863 | | 0.1616 | 4.4812 | 13000 | 0.2499 | 0.2852 | | 0.1633 | 4.6536 | 13500 | 0.2414 | 0.2844 | | 0.1652 | 4.8259 | 14000 | 0.2330 | 0.2894 | | 0.1659 | 4.9983 | 14500 | 0.2339 | 0.2703 | | 0.1496 | 5.1706 | 15000 | 0.2405 | 0.2832 | | 0.1468 | 5.3430 | 15500 | 0.2378 | 0.2731 | | 0.1435 | 5.5153 | 16000 | 0.2328 | 0.2679 | | 0.1386 | 5.6877 | 16500 | 0.2332 | 0.2715 | | 0.1422 | 5.8600 | 17000 | 0.2328 | 0.2683 | | 0.1429 | 6.0324 | 17500 | 0.2500 | 0.2715 | | 0.1271 | 6.2048 | 18000 | 0.2447 | 0.2635 | | 0.1374 | 6.3771 | 18500 | 0.2412 | 0.2679 | | 0.1306 | 6.5495 | 19000 | 0.2403 | 0.2604 | | 0.1287 | 6.7218 | 19500 | 0.2319 | 0.2541 | | 0.131 | 6.8942 | 20000 | 0.2407 | 0.2600 | | 0.1261 | 7.0665 | 20500 | 0.2335 | 0.2547 | | 0.1202 | 7.2389 | 21000 | 0.2321 | 0.2509 | | 0.1194 | 7.4112 | 21500 | 0.2380 | 0.2546 | | 0.1216 | 7.5836 | 22000 | 0.2515 | 0.2560 | | 0.1139 | 7.7559 | 22500 | 0.2295 | 0.2502 | | 0.1159 | 7.9283 | 23000 | 0.2291 | 0.2529 | | 0.1145 | 8.1007 | 23500 | 0.2471 | 0.2507 | | 0.1072 | 8.2730 | 24000 | 0.2327 | 0.2456 | | 0.1106 | 8.4454 | 24500 | 0.2243 | 0.2461 | | 0.1069 | 8.6177 | 25000 | 0.2305 | 0.2456 | | 0.1116 | 8.7901 | 25500 | 0.2397 | 0.2486 | | 0.1079 | 8.9624 | 26000 | 0.2417 | 0.2528 | | 0.094 | 9.1348 | 26500 | 0.2484 | 0.2442 | | 0.0954 | 9.3071 | 
27000 | 0.2385 | 0.2477 | | 0.0981 | 9.4795 | 27500 | 0.2526 | 0.2516 | | 0.1037 | 9.6518 | 28000 | 0.2346 | 0.2391 | | 0.0934 | 9.8242 | 28500 | 0.2342 | 0.2414 | | 0.0968 | 9.9966 | 29000 | 0.2385 | 0.2387 | | 0.0954 | 10.1689 | 29500 | 0.2367 | 0.2389 | | 0.0903 | 10.3413 | 30000 | 0.2346 | 0.2365 | | 0.0931 | 10.5136 | 30500 | 0.2472 | 0.2385 | | 0.0911 | 10.6860 | 31000 | 0.2562 | 0.2368 | | 0.0902 | 10.8583 | 31500 | 0.2375 | 0.2390 | | 0.0831 | 11.0307 | 32000 | 0.2265 | 0.2326 | | 0.0822 | 11.2030 | 32500 | 0.2464 | 0.2305 | | 0.083 | 11.3754 | 33000 | 0.2361 | 0.2299 | | 0.0802 | 11.5477 | 33500 | 0.2440 | 0.2389 | | 0.0757 | 11.7201 | 34000 | 0.2435 | 0.2261 | | 0.0781 | 11.8925 | 34500 | 0.2410 | 0.2293 | | 0.0823 | 12.0648 | 35000 | 0.2551 | 0.2423 | | 0.0748 | 12.2372 | 35500 | 0.2448 | 0.2245 | | 0.0724 | 12.4095 | 36000 | 0.2369 | 0.2208 | | 0.0716 | 12.5819 | 36500 | 0.2462 | 0.2280 | | 0.0734 | 12.7542 | 37000 | 0.2407 | 0.2255 | | 0.0771 | 12.9266 | 37500 | 0.2461 | 0.2304 | | 0.0715 | 13.0989 | 38000 | 0.2496 | 0.2237 | | 0.0702 | 13.2713 | 38500 | 0.2515 | 0.2228 | | 0.0697 | 13.4436 | 39000 | 0.2377 | 0.2217 | | 0.0712 | 13.6160 | 39500 | 0.2446 | 0.2182 | | 0.0641 | 13.7883 | 40000 | 0.2461 | 0.2187 | | 0.0712 | 13.9607 | 40500 | 0.2534 | 0.2155 | | 0.0644 | 14.1331 | 41000 | 0.2428 | 0.2140 | | 0.0584 | 14.3054 | 41500 | 0.2595 | 0.2156 | | 0.0621 | 14.4778 | 42000 | 0.2474 | 0.2139 | | 0.0634 | 14.6501 | 42500 | 0.2571 | 0.2184 | | 0.0643 | 14.8225 | 43000 | 0.2556 | 0.2180 | | 0.0599 | 14.9948 | 43500 | 0.2532 | 0.2160 | | 0.06 | 15.1672 | 44000 | 0.2468 | 0.2182 | | 0.0555 | 15.3395 | 44500 | 0.2530 | 0.2152 | | 0.0542 | 15.5119 | 45000 | 0.2530 | 0.2080 | | 0.0533 | 15.6842 | 45500 | 0.2414 | 0.2111 | | 0.0587 | 15.8566 | 46000 | 0.2457 | 0.2081 | | 0.0556 | 16.0290 | 46500 | 0.2509 | 0.2085 | | 0.0538 | 16.2013 | 47000 | 0.2500 | 0.2067 | | 0.052 | 16.3737 | 47500 | 0.2472 | 0.2076 | | 0.0504 | 16.5460 | 48000 | 0.2537 | 0.2080 | | 0.0562 | 16.7184 | 48500 | 0.2512 | 0.2047 | | 0.0487 | 16.8907 | 49000 | 0.2604 | 0.2058 | | 0.0526 | 17.0631 | 49500 | 0.2530 | 0.2064 | | 0.0457 | 17.2354 | 50000 | 0.2531 | 0.2034 | | 0.0483 | 17.4078 | 50500 | 0.2532 | 0.2032 | | 0.0456 | 17.5801 | 51000 | 0.2585 | 0.2040 | | 0.0507 | 17.7525 | 51500 | 0.2550 | 0.2025 | | 0.0471 | 17.9249 | 52000 | 0.2439 | 0.2003 | | 0.0485 | 18.0972 | 52500 | 0.2517 | 0.1989 | | 0.0472 | 18.2696 | 53000 | 0.2540 | 0.2007 | | 0.0472 | 18.4419 | 53500 | 0.2595 | 0.2016 | | 0.0464 | 18.6143 | 54000 | 0.2491 | 0.1987 | | 0.0436 | 18.7866 | 54500 | 0.2581 | 0.1988 | | 0.0443 | 18.9590 | 55000 | 0.2530 | 0.1978 | | 0.0454 | 19.1313 | 55500 | 0.2525 | 0.1967 | | 0.039 | 19.3037 | 56000 | 0.2537 | 0.1956 | | 0.0432 | 19.4760 | 56500 | 0.2571 | 0.1975 | | 0.0431 | 19.6484 | 57000 | 0.2543 | 0.1964 | | 0.0449 | 19.8208 | 57500 | 0.2543 | 0.1950 | | 0.0407 | 19.9931 | 58000 | 0.2539 | 0.1949 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.17.1 - Tokenizers 0.19.1
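The WER of 0.1949 reported above is a word error rate on the Common Voice 17.0 Turkish test split. As a rough illustration of how such a figure is obtained (this sketch is not part of the original card), the snippet below runs the published checkpoint through the `transformers` ASR pipeline and scores it with the `evaluate` WER metric. The full dataset repo id, the 16 kHz resampling, the lowercasing, and the 100-clip subset are assumptions, and the dataset may require accepting its terms on the Hub, so the printed number will not exactly match the reported score.

```python
# Hedged sketch: score the checkpoint with the standard WER metric.
# Assumptions: the Common Voice 17.0 repo id below, 16 kHz audio, and simple
# lowercasing (the card does not document its exact preprocessing).
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tgrhn/wav2vec2-turkish-300m-8")
wer = evaluate.load("wer")

ds = load_dataset("mozilla-foundation/common_voice_17_0", "tr", split="test[:100]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

preds = [asr(ex["audio"]["array"])["text"].lower() for ex in ds]
refs = [ex["sentence"].lower() for ex in ds]
print("WER on 100 test clips:", wer.compute(predictions=preds, references=refs))
```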
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_17_0"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-turkish-300m-8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_17_0", "type": "common_voice_17_0", "config": "tr", "split": "test", "args": "tr"}, "metrics": [{"type": "wer", "value": 0.19493994377715307, "name": "Wer"}]}]}]}
tgrhn/wav2vec2-turkish-300m-8
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:38:23+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_17_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-turkish-300m-8 ======================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice\_17\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.2539 * Wer: 0.1949 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 0.1 * num\_epochs: 20 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.2+cu121 * Datasets 2.17.1 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 0.1\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.17.1\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_17_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 0.1\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.17.1\n* Tokenizers 0.19.1" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-fa This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.8445 - Wer: 91.9255 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.8399 | 1.3158 | 25 | 1.9075 | 88.8199 | | 0.5744 | 2.6316 | 50 | 1.8445 | 91.9255 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
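For completeness, here is a minimal, hedged inference sketch (not part of the original card) showing how a checkpoint like this is typically used through the `transformers` ASR pipeline; the audio file name and generation arguments are illustrative assumptions. Note that with only 50 training steps and a WER above 91%, this checkpoint documents a training run rather than a usable Persian transcription model.

```python
# Hedged usage sketch; "sample_fa.wav" and the generation kwargs are assumptions.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="MohammadPourbahram/whisper-small-fa",
    device=0 if torch.cuda.is_available() else -1,
)

# Ask Whisper to transcribe (not translate) and hint the language as Persian.
result = asr("sample_fa.wav", generate_kwargs={"language": "fa", "task": "transcribe"})
print(result["text"])
```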
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "whisper-small-fa", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_11_0", "type": "common_voice_11_0", "config": "fa", "split": "None", "args": "fa"}, "metrics": [{"type": "wer", "value": 91.92546583850931, "name": "Wer"}]}]}]}
MohammadPourbahram/whisper-small-fa
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:38:55+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
whisper-small-fa ================ This model is a fine-tuned version of openai/whisper-small on the common\_voice\_11\_0 dataset. It achieves the following results on the evaluation set: * Loss: 1.8445 * Wer: 91.9255 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 10 * training\_steps: 50 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 10\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 10\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base. ### Models Merged The following models were included in the merge: * [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml --- models: - model: NousResearch/Meta-Llama-3-8B parameters: weight: 0.5 - model: NousResearch/Meta-Llama-3-8B-Instruct parameters: weight: 0.5 merge_method: task_arithmetic base_model: NousResearch/Meta-Llama-3-8B dtype: bfloat16 tokenizer_source: union ```
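To make the merge method above concrete, here is a small illustrative sketch (not mergekit's actual implementation) of the task-arithmetic idea: each model contributes a task vector, its weights minus the base weights, and the merge adds the weighted task vectors back onto the base. Under that straightforward reading, listing the base model itself with weight 0.5 contributes a zero task vector, so this configuration effectively moves the base halfway toward the Instruct weights.

```python
# Toy illustration of task arithmetic on a single tensor (not mergekit code).
import torch

def task_arithmetic(base, models, weights):
    merged = base.clone()
    for m, w in zip(models, weights):
        merged += w * (m - base)  # add the scaled task vector (m - base)
    return merged

torch.manual_seed(0)
base = torch.randn(4, 4)                   # stands in for Meta-Llama-3-8B weights
instruct = base + 0.1 * torch.randn(4, 4)  # stands in for the Instruct fine-tune

merged = task_arithmetic(base, [base, instruct], [0.5, 0.5])
print(torch.allclose(merged, base + 0.5 * (instruct - base)))  # True
```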
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Meta-Llama-3-8B", "NousResearch/Meta-Llama-3-8B-Instruct"]}
kotyKD/Llama-3-Base-Instruct-variation1
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:39:07+00:00
[ "2212.04089" ]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #base_model-NousResearch/Meta-Llama-3-8B #base_model-NousResearch/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the task arithmetic merge method using NousResearch/Meta-Llama-3-8B as a base. ### Models Merged The following models were included in the merge: * NousResearch/Meta-Llama-3-8B-Instruct ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the task arithmetic merge method using NousResearch/Meta-Llama-3-8B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Meta-Llama-3-8B-Instruct", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #base_model-NousResearch/Meta-Llama-3-8B #base_model-NousResearch/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the task arithmetic merge method using NousResearch/Meta-Llama-3-8B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Meta-Llama-3-8B-Instruct", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2 This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
{"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2", "results": []}]}
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:39:34+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2 This model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
[ "# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1" ]
text-generation
transformers
# Uploaded model - **Developed by:** yeilho - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
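A hedged loading sketch follows (it is not part of the card): it assumes the repository can be loaded through Unsloth's `FastLanguageModel` in 4-bit, and the sequence length and prompt are illustrative placeholders. Outputs from a medically fine-tuned model should not be treated as medical advice.

```python
# Hedged sketch, assuming the repo loads via Unsloth in 4-bit.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yeilho/llama-3-8b-Instruct-bnb-4bit-medical",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

messages = [{"role": "user", "content": "List common symptoms of iron deficiency."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```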
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
yeilho/llama-3-8b-Instruct-bnb-4bit-medical
null
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:41:44+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: yeilho - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: yeilho\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: yeilho\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
* <span style="color:orange">I'm just tinkering. All credit to the original creator: [Undi](https://huggingface.co/Undi95).</span> * <span style="color:orange">"rpcal" designates that this model was quantized using an [RP-specific data set](https://huggingface.co/datasets/royallab/PIPPA-cleaned) instead of the generalized wiki or llama data set. This is likely the last model I will create with this method as Llama-3-8B seems to get markedly dumber by doing it this way. In previous models, it was difficult to tell, but the margin of error increase from quantizing Llama-3-8B makes it obvious which method is better. I deleted the lower quants of rpcal because they are pretty dumb by comparison. This one seems to work fine, and is the only one I would recommend if you want to compare with the other, yourself. </span> * <span style="color:orange">This model: EXL2 @ 8.0 bpw using RP data for calibration.</span> --- # LewdPlay-8B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The new EVOLVE merge method was used (on MMLU specifically), see below for more information! Unholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side. ## Prompt template: Llama3 ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base. ### Models Merged The following models were included in the merge: * ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 * ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 dtype: bfloat16 merge_method: dare_ties parameters: int8_mask: 1.0 normalize: 0.0 slices: - sources: - layer_range: [0, 4] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.6861808716092435 - layer_range: [0, 4] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.6628290134113985 weight: 0.5815923052193855 - layer_range: [0, 4] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.5113886163963061 - sources: - layer_range: [4, 8] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.892655547455918 weight: 0.038732602391021484 - layer_range: [4, 8] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 1.0 weight: 0.1982145486303527 - layer_range: [4, 8] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.6843011350690802 - sources: - layer_range: [8, 12] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.7817511027396784 weight: 0.13053333213489704 - layer_range: [8, 12] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.6963703515864826 weight: 0.20525481492667985 - layer_range: [8, 12] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.6983086326765777 weight: 0.5843953969574106 - sources: - layer_range: [12, 16] model: 
./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.9632895768462915 weight: 0.2101146706607748 - layer_range: [12, 16] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.597557434542081 weight: 0.6728172621848589 - layer_range: [12, 16] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.756263557607837 weight: 0.2581423726361908 - sources: - layer_range: [16, 20] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.2116035543552448 - layer_range: [16, 20] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 1.0 weight: 0.22654226422958418 - layer_range: [16, 20] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.8925914810507647 weight: 0.42243766315440867 - sources: - layer_range: [20, 24] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.7697608089825734 weight: 0.1535118632140203 - layer_range: [20, 24] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.9886758076773643 weight: 0.3305040603868546 - layer_range: [20, 24] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.40670083428654535 - sources: - layer_range: [24, 28] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.4542810478500622 - layer_range: [24, 28] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.8330662483310117 weight: 0.2587495367324508 - layer_range: [24, 28] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.9845313983551542 weight: 0.40378452705975915 - sources: - layer_range: [28, 32] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.2951962192288415 - layer_range: [28, 32] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.960315594933433 weight: 0.13142971773782525 - layer_range: [28, 32] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.30838472094518804 ``` ## Support If you want to support me, you can [here](https://ko-fi.com/undiai).
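Because EXL2 front ends usually take a raw prompt string, a small helper (not from the original card) that assembles the Llama 3 template shown above may be useful; the system prompt and user text are placeholders.

```python
# Assemble the Llama 3 chat template from the card; inputs are placeholders.
def build_llama3_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful assistant.", "Introduce yourself."))
```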
{"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw", "merge"], "base_model": ["vicgalle/Roleplay-Llama-3-8B", "Undi95/Llama-3-Unholy-8B-e4", "Undi95/Llama-3-LewdPlay-8B"]}
zaq-hack/Llama-3-LewdPlay-8B-evo-bpw800-h8-exl2-rpcal
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "nsfw", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:vicgalle/Roleplay-Llama-3-8B", "base_model:Undi95/Llama-3-Unholy-8B-e4", "base_model:Undi95/Llama-3-LewdPlay-8B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-26T21:43:23+00:00
[ "2311.03099", "2306.01708" ]
[]
TAGS #transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-vicgalle/Roleplay-Llama-3-8B #base_model-Undi95/Llama-3-Unholy-8B-e4 #base_model-Undi95/Llama-3-LewdPlay-8B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
* <span style="color:orange">I'm just tinkering. All credit to the original creator: Undi.</span> * <span style="color:orange">"rpcal" designates that this model was quantized using an RP-specific data set instead of the generalized wiki or llama data set. This is likely the last model I will create with this method as Llama-3-8B seems to get markedly dumber by doing it this way. In previous models, it was difficult to tell, but the margin of error increase from quantizing Llama-3-8B makes it obvious which method is better. I deleted the lower quants of rpcal because they are pretty dumb by comparison. This one seems to work fine, and is the only one I would recommend if you want to compare with the other, yourself. </span> * <span style="color:orange">This model: EXL2 @ 8.0 bpw using RP data for calibration.</span> --- # LewdPlay-8B This is a merge of pre-trained language models created using mergekit. The new EVOLVE merge method was used (on MMLU specifically), see below for more information! Unholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side. ## Prompt template: Llama3 ## Merge Details ### Merge Method This model was merged using the DARE TIES merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base. ### Models Merged The following models were included in the merge: * ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 * ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 ### Configuration The following YAML configuration was used to produce this model: ## Support If you want to support me, you can here.
[ "# LewdPlay-8B\n\nThis is a merge of pre-trained language models created using mergekit.\n\nThe new EVOLVE merge method was used (on MMLU specifically), see below for more information!\n\nUnholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side.", "## Prompt template: Llama3", "## Merge Details", "### Merge Method\n\nThis model was merged using the DARE TIES merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923\n* ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066", "### Configuration\n\nThe following YAML configuration was used to produce this model:", "## Support\n\nIf you want to support me, you can here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-vicgalle/Roleplay-Llama-3-8B #base_model-Undi95/Llama-3-Unholy-8B-e4 #base_model-Undi95/Llama-3-LewdPlay-8B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# LewdPlay-8B\n\nThis is a merge of pre-trained language models created using mergekit.\n\nThe new EVOLVE merge method was used (on MMLU specifically), see below for more information!\n\nUnholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side.", "## Prompt template: Llama3", "## Merge Details", "### Merge Method\n\nThis model was merged using the DARE TIES merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923\n* ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066", "### Configuration\n\nThe following YAML configuration was used to produce this model:", "## Support\n\nIf you want to support me, you can here." ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/openbmb/Eurux-8x22b-nca <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Eurux-8x22b-nca-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ1_S.gguf) | i1-IQ1_S | 29.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ1_M.gguf) | i1-IQ1_M | 32.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ2_XS.gguf) | i1-IQ2_XS | 42.1 | | | [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ2_S.gguf) | i1-IQ2_S | 42.7 | | | [GGUF](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ2_M.gguf) | i1-IQ2_M | 46.8 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 52.2 | IQ3_XXS probably better | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 55.0 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 58.3 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 61.6 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 61.6 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 64.6 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 67.9 | IQ3_S 
probably better | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 72.7 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 75.6 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 80.0 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 80.6 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 85.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 97.1 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q5_K_M.gguf.part3of3) | i1-Q5_K_M | 100.1 | | | [PART 1](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Eurux-8x22b-nca-i1-GGUF/resolve/main/Eurux-8x22b-nca.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 115.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
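For the multi-part quants in the table, the referenced READMEs describe joining the `.partXofY` pieces into a single GGUF by plain byte concatenation before loading. A hedged Python sketch of that step follows (not part of the original card; the Q4_K_M filenames are taken from the table above).

```python
# Join downloaded .partXofY pieces into one GGUF by byte concatenation.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Eurux-8x22b-nca.i1-Q4_K_M.gguf.part*of*"))
assert parts, "download the .part files from the table above first"

with open("Eurux-8x22b-nca.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        print("appending", part.name)
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams, so huge parts fit in memory
```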
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["reasoning", "preference_learning", "nca"], "datasets": ["openbmb/UltraInteract_sft", "openbmb/UltraInteract_pair", "openbmb/UltraFeedback"], "base_model": "openbmb/Eurux-8x22b-nca", "quantized_by": "mradermacher"}
mradermacher/Eurux-8x22b-nca-i1-GGUF
null
[ "transformers", "gguf", "reasoning", "preference_learning", "nca", "en", "dataset:openbmb/UltraInteract_sft", "dataset:openbmb/UltraInteract_pair", "dataset:openbmb/UltraFeedback", "base_model:openbmb/Eurux-8x22b-nca", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:44:43+00:00
[]
[ "en" ]
TAGS #transformers #gguf #reasoning #preference_learning #nca #en #dataset-openbmb/UltraInteract_sft #dataset-openbmb/UltraInteract_pair #dataset-openbmb/UltraFeedback #base_model-openbmb/Eurux-8x22b-nca #license-apache-2.0 #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #reasoning #preference_learning #nca #en #dataset-openbmb/UltraInteract_sft #dataset-openbmb/UltraInteract_pair #dataset-openbmb/UltraFeedback #base_model-openbmb/Eurux-8x22b-nca #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi2-lima This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the GAIR/lima dataset. It achieves the following results on the evaluation set: - Loss: 2.5096 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2578 | 1.0 | 6 | 2.3195 | | 2.1177 | 2.0 | 12 | 2.1448 | | 2.0262 | 3.0 | 18 | 2.1417 | | 1.9422 | 4.0 | 24 | 2.2227 | | 1.7786 | 5.0 | 30 | 2.3327 | | 1.7224 | 6.0 | 36 | 2.4202 | | 1.684 | 7.0 | 42 | 2.4698 | | 1.6434 | 8.0 | 48 | 2.4961 | | 1.616 | 9.0 | 54 | 2.5094 | | 1.6183 | 10.0 | 60 | 2.5096 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
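As a small aside (not in the original card), the reported evaluation loss is the mean token-level cross-entropy in nats, so it maps directly to perplexity via exp(loss); the snippet below shows that conversion for the best and final checkpoints in the table.

```python
# Convert the reported eval losses (mean cross-entropy in nats) to perplexity.
import math

for epoch, loss in [(3, 2.1417), (10, 2.5096)]:
    print(f"epoch {epoch}: eval loss {loss:.4f} -> perplexity {math.exp(loss):.2f}")
# Epoch 3 has the lowest validation loss (~8.5 perplexity); validation loss
# rises afterwards while training loss keeps falling, i.e. the model overfits.
```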
{"license": "mit", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["GAIR/lima"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi2-lima", "results": []}]}
pkarypis/phi2-lima
null
[ "transformers", "tensorboard", "safetensors", "phi", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "custom_code", "dataset:GAIR/lima", "base_model:microsoft/phi-2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:45:12+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #phi #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #custom_code #dataset-GAIR/lima #base_model-microsoft/phi-2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
phi2-lima ========= This model is a fine-tuned version of microsoft/phi-2 on the GAIR/lima dataset. It achieves the following results on the evaluation set: * Loss: 2.5096 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 16 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10.0 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.1.2 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #phi #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #custom_code #dataset-GAIR/lima #base_model-microsoft/phi-2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Laz4rz/hf-huggy-1-bonus 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
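If you prefer to fetch the trained policy programmatically instead of through the browser flow above, a hedged sketch using `huggingface_hub` (not part of the original card) is:

```python
# Download the repo (including the .onnx policy) for local use with ML-Agents.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Laz4rz/hf-huggy-1-bonus")
print("files downloaded to:", local_dir)
```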
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
Laz4rz/hf-huggy-1-bonus
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-26T21:48:17+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how works ML-Agents: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: Laz4rz/hf-huggy-1-bonus 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Laz4rz/hf-huggy-1-bonus\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Laz4rz/hf-huggy-1-bonus\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
I am really enjoying this version of Cinder. More information is coming. The training mix combines Cinder character-specific data, RAG-generated Q&A on world knowledge and STEM topics, and additional Cinder character data. I supplemented the Cinder character with an abbreviated Samantha dataset edited for Cinder and removed a lot of the negative responses. Model Overview Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/obCyZSvfUefEWrOXaeB3o.png) ## Model Summary The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets that include both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) + Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. 
The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat). ### Chat Format Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-4k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct") messages = [ {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. 
Some of the limiting behaviors to be aware of include: + Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy them for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of the Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. * Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with a cutoff date of October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness. ### Fine-tuning A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py). ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) * Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Cross Platform Support The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx). Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices, including CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
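For CPU-only setups, the Hardware section above points to the GGUF quantized models. Below is a minimal sketch of running a GGUF file from this repository with llama-cpp-python; the local file name, thread count, and sampling settings are illustrative assumptions rather than values taken from this card.

```python
# Minimal sketch: CPU inference with a GGUF build of this model via llama-cpp-python.
# Assumptions: llama-cpp-python is installed (pip install llama-cpp-python) and a GGUF
# file from this repository has been downloaded locally; the path below is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./cinder-phi-3-mini-4k-16bit.gguf",  # hypothetical local file name
    n_ctx=4096,   # matches the 4K context length of the base model
    n_threads=8,  # tune to your CPU
)

messages = [
    {"role": "system", "content": "You are Cinder, a helpful STEM-focused assistant."},
    {"role": "user", "content": "Explain in two sentences why the sky is blue."},
]

# create_chat_completion applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.2)
print(out["choices"][0]["message"]["content"])
```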
{"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "datasets": ["Josephgflowers/just_cinder"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"text": "<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>\n"}]}
Josephgflowers/Phi-3-mini-4k-instruct-Cinder-llamafied-with-16bit-GGUF
null
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "nlp", "code", "conversational", "en", "dataset:Josephgflowers/just_cinder", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T21:50:02+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #gguf #llama #text-generation #nlp #code #conversational #en #dataset-Josephgflowers/just_cinder #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
I am really enjoying this version of Cinder. More information coming. As well as Cinder character specific data, a mix of RAG generated Q and A of world knowledge, STEM topics, and Cinder Character data. I suplimented the Cinder character with an abreviated Samantha dataset edited for Cinder and removed a lot of the negative responses. Model Overview Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. !image/png ## Model Summary The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters. Resources and Technical Documentation: + Phi-3 Microsoft Blog + Phi-3 Technical Report + Phi-3 on Azure AI Studio + Phi-3 GGUF: 4K + Phi-3 ONNX: 4K ## Intended Uses Primary use cases The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. Use case considerations Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following: * When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function. * Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source. The current 'transformers' version can be verified with: 'pip list | grep transformers'. Phi-3 Mini-4K-Instruct is also available in HuggingChat. ### Chat Format Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. 
You can provide the prompt as a question with a generic template as follow: For example: where the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following: ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. 
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. ### Fine-tuning A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here. ## Software * PyTorch * DeepSpeed * Transformers * Flash-Attention ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * CPU: use the GGUF quantized models 4K + Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K ## Cross Platform Support ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here. Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the MIT license. ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. 
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
[ "## Model Summary\n\nThe Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.\nThe model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.\n\nThe model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures.\nWhen assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.\n\nResources and Technical Documentation:\n\n+ Phi-3 Microsoft Blog\n+ Phi-3 Technical Report\n+ Phi-3 on Azure AI Studio\n+ Phi-3 GGUF: 4K\n+ Phi-3 ONNX: 4K", "## Intended Uses\n\nPrimary use cases\n\nThe model is intended for commercial and research use in English. The model provides uses for applications which require:\n\n1) Memory/compute constrained environments\n2) Latency bound scenarios\n3) Strong reasoning (especially code, math and logic)\n\nOur model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. \n\nUse case considerations\n\nOur models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.\n\nNothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.", "## How to Use\n\nPhi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\n* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.\n\nPhi-3 Mini-4K-Instruct is also available in HuggingChat.", "### Chat Format\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. \nYou can provide the prompt as a question with a generic template as follow:\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:", "### Sample inference code\n\nThis code snippets show how to get quickly started with running the model on a GPU:", "## Responsible AI Considerations\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. 
Some of the limiting behaviors to be aware of include:\n\n+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. \n+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. \n+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. \n+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. \n+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. \n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. \n+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). \n+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. \n+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.", "## Training", "### Model\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. 
It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.", "### Datasets\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of \n1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; \n2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); \n3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.", "### Fine-tuning\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.", "## Software\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention", "## Hardware\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\nIf you want to run the model on:\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n+ Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K", "## Cross Platform Support\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\nHere are some of the optimized configurations we have added: \n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN", "## License\n\nThe model is licensed under the MIT license.", "## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies." ]
[ "TAGS\n#transformers #safetensors #gguf #llama #text-generation #nlp #code #conversational #en #dataset-Josephgflowers/just_cinder #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Model Summary\n\nThe Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.\nThe model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.\n\nThe model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures.\nWhen assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.\n\nResources and Technical Documentation:\n\n+ Phi-3 Microsoft Blog\n+ Phi-3 Technical Report\n+ Phi-3 on Azure AI Studio\n+ Phi-3 GGUF: 4K\n+ Phi-3 ONNX: 4K", "## Intended Uses\n\nPrimary use cases\n\nThe model is intended for commercial and research use in English. The model provides uses for applications which require:\n\n1) Memory/compute constrained environments\n2) Latency bound scenarios\n3) Strong reasoning (especially code, math and logic)\n\nOur model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. \n\nUse case considerations\n\nOur models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.\n\nNothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.", "## How to Use\n\nPhi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\n* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.\n\nPhi-3 Mini-4K-Instruct is also available in HuggingChat.", "### Chat Format\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. \nYou can provide the prompt as a question with a generic template as follow:\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . 
In case of few-shots prompt, the prompt can be formatted as the following:", "### Sample inference code\n\nThis code snippets show how to get quickly started with running the model on a GPU:", "## Responsible AI Considerations\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. \n+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. \n+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. \n+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. \n+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. \n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. \n+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). \n+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. 
\n+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.", "## Training", "### Model\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.", "### Datasets\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of \n1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; \n2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); \n3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.", "### Fine-tuning\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.", "## Software\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention", "## Hardware\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\nIf you want to run the model on:\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n+ Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K", "## Cross Platform Support\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\nHere are some of the optimized configurations we have added: \n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN", "## License\n\nThe model is licensed under the MIT license.", "## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. 
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies." ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-70B-v1.0 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q5_K_M.gguf) | Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF/resolve/main/JSL-MedLlama-3-70B-v1.0.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if 
you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
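As noted in the Usage section, the larger quants (Q6_K and Q8_0 above) ship as multi-part files that must be joined before loading. A minimal sketch of doing this in Python follows, assuming the parts are plain byte-wise splits that concatenate in order, as the linked READMEs describe.

```python
# Minimal sketch: join a multi-part GGUF download into a single file before use.
# Assumption: the parts are plain byte-wise splits that can be concatenated in order,
# as described in the READMEs linked above.
import shutil

parts = [
    "JSL-MedLlama-3-70B-v1.0.Q6_K.gguf.part1of2",
    "JSL-MedLlama-3-70B-v1.0.Q6_K.gguf.part2of2",
]

with open("JSL-MedLlama-3-70B-v1.0.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams the copy instead of reading ~58 GB into RAM
```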
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "library_name": "transformers", "tags": ["llama-3-70b", "sft", "medical"], "base_model": "johnsnowlabs/JSL-MedLlama-3-70B-v1.0", "quantized_by": "mradermacher"}
mradermacher/JSL-MedLlama-3-70B-v1.0-GGUF
null
[ "transformers", "gguf", "llama-3-70b", "sft", "medical", "en", "base_model:johnsnowlabs/JSL-MedLlama-3-70B-v1.0", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:50:24+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama-3-70b #sft #medical #en #base_model-johnsnowlabs/JSL-MedLlama-3-70B-v1.0 #license-cc-by-nc-nd-4.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #llama-3-70b #sft #medical #en #base_model-johnsnowlabs/JSL-MedLlama-3-70B-v1.0 #license-cc-by-nc-nd-4.0 #endpoints_compatible #region-us \n" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Quiet-Mistral - GGUF - Model creator: https://huggingface.co/Crystalcareai/ - Original model: https://huggingface.co/Crystalcareai/Quiet-Mistral/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Quiet-Mistral.Q2_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q2_K.gguf) | Q2_K | 2.53GB | | [Quiet-Mistral.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Quiet-Mistral.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Quiet-Mistral.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Quiet-Mistral.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Quiet-Mistral.Q3_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q3_K.gguf) | Q3_K | 3.28GB | | [Quiet-Mistral.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Quiet-Mistral.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Quiet-Mistral.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Quiet-Mistral.Q4_0.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_0.gguf) | Q4_0 | 3.83GB | | [Quiet-Mistral.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Quiet-Mistral.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Quiet-Mistral.Q4_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_K.gguf) | Q4_K | 4.07GB | | [Quiet-Mistral.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Quiet-Mistral.Q4_1.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q4_1.gguf) | Q4_1 | 4.24GB | | [Quiet-Mistral.Q5_0.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_0.gguf) | Q5_0 | 4.65GB | | [Quiet-Mistral.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Quiet-Mistral.Q5_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_K.gguf) | Q5_K | 4.78GB | | [Quiet-Mistral.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Quiet-Mistral.Q5_1.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q5_1.gguf) | Q5_1 | 5.07GB | | 
[Quiet-Mistral.Q6_K.gguf](https://huggingface.co/RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf/blob/main/Quiet-Mistral.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: ~~Mistral 7b v0.2 with attention_dropout=0.6, for training purposes~~ Conversion process: 1. Download original weights from https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar 2. Convert with https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/convert_mistral_weights_to_hf.py 3. You may need to copy the tokenizer.model from the Mistral-7B-Instruct-v0.2 repo.
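Step 3 of the conversion process above is terse; the sketch below shows one way to fetch `tokenizer.model` from the Mistral-7B-Instruct-v0.2 repository with `huggingface_hub` and drop it into the converted checkpoint directory. The output directory name is an assumption for illustration.

```python
# Sketch of step 3 above: fetch tokenizer.model from the Mistral-7B-Instruct-v0.2 repo
# and place it next to the converted weights. Assumes huggingface_hub is installed;
# the output directory name is illustrative.
import shutil
from huggingface_hub import hf_hub_download

tokenizer_path = hf_hub_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    filename="tokenizer.model",
)
shutil.copy(tokenizer_path, "mistral-7b-v0.2-hf/tokenizer.model")  # converted checkpoint dir (assumed)
```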
{}
RichardErkhov/Crystalcareai_-_Quiet-Mistral-gguf
null
[ "gguf", "region:us" ]
null
2024-04-26T21:50:43+00:00
[]
[]
TAGS #gguf #region-us
Quantization made by Richard Erkhov. Github Discord Request more models Quiet-Mistral - GGUF * Model creator: URL * Original model: URL Name: Quiet-Mistral.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.53GB Name: Quiet-Mistral.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 2.81GB Name: Quiet-Mistral.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 2.96GB Name: Quiet-Mistral.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 2.95GB Name: Quiet-Mistral.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.06GB Name: Quiet-Mistral.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.28GB Name: Quiet-Mistral.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.28GB Name: Quiet-Mistral.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.56GB Name: Quiet-Mistral.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 3.67GB Name: Quiet-Mistral.Q4\_0.gguf, Quant method: Q4\_0, Size: 3.83GB Name: Quiet-Mistral.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 3.87GB Name: Quiet-Mistral.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 3.86GB Name: Quiet-Mistral.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.07GB Name: Quiet-Mistral.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.07GB Name: Quiet-Mistral.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.24GB Name: Quiet-Mistral.Q5\_0.gguf, Quant method: Q5\_0, Size: 4.65GB Name: Quiet-Mistral.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 4.65GB Name: Quiet-Mistral.Q5\_K.gguf, Quant method: Q5\_K, Size: 4.78GB Name: Quiet-Mistral.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 4.78GB Name: Quiet-Mistral.Q5\_1.gguf, Quant method: Q5\_1, Size: 5.07GB Name: Quiet-Mistral.Q6\_K.gguf, Quant method: Q6\_K, Size: 5.53GB Original model description: ~~Mistral 7b v0.2 with attention\_dropout=0.6, for training purposes~~ Conversion process: 1. Download original weights from URL 2. Convert with URL 3. You may need to copy the URL from Mistral-7B-Instruct-v0.2 repo.
[]
[ "TAGS\n#gguf #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset. It achieves the following results on the evaluation set: - Loss: 0.2558 - F1 Score: 0.9111 - Accuracy: 0.9110 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3011 | 2.17 | 200 | 0.2643 | 0.8981 | 0.8980 | | 0.2454 | 4.35 | 400 | 0.2584 | 0.8979 | 0.8980 | | 0.2237 | 6.52 | 600 | 0.2645 | 0.9016 | 0.9014 | | 0.2107 | 8.7 | 800 | 0.2743 | 0.8915 | 0.8912 | | 0.1936 | 10.87 | 1000 | 0.2737 | 0.8958 | 0.8960 | | 0.181 | 13.04 | 1200 | 0.2963 | 0.8827 | 0.8823 | | 0.1593 | 15.22 | 1400 | 0.3184 | 0.8908 | 0.8905 | | 0.1453 | 17.39 | 1600 | 0.3405 | 0.8839 | 0.8836 | | 0.1285 | 19.57 | 1800 | 0.3479 | 0.8939 | 0.8939 | | 0.1111 | 21.74 | 2000 | 0.4011 | 0.8771 | 0.8768 | | 0.1005 | 23.91 | 2200 | 0.4055 | 0.8819 | 0.8816 | | 0.0903 | 26.09 | 2400 | 0.4202 | 0.8913 | 0.8912 | | 0.0782 | 28.26 | 2600 | 0.4638 | 0.8853 | 0.8850 | | 0.0666 | 30.43 | 2800 | 0.4875 | 0.8773 | 0.8768 | | 0.063 | 32.61 | 3000 | 0.5041 | 0.8791 | 0.8789 | | 0.0549 | 34.78 | 3200 | 0.4648 | 0.8886 | 0.8884 | | 0.0479 | 36.96 | 3400 | 0.5217 | 0.8907 | 0.8905 | | 0.0426 | 39.13 | 3600 | 0.6087 | 0.8800 | 0.8802 | | 0.0398 | 41.3 | 3800 | 0.5759 | 0.8764 | 0.8761 | | 0.0347 | 43.48 | 4000 | 0.6083 | 0.8818 | 0.8816 | | 0.0293 | 45.65 | 4200 | 0.6258 | 0.8877 | 0.8877 | | 0.0259 | 47.83 | 4400 | 0.7382 | 0.8804 | 0.8802 | | 0.0279 | 50.0 | 4600 | 0.6818 | 0.8866 | 0.8864 | | 0.0255 | 52.17 | 4800 | 0.6983 | 0.8873 | 0.8871 | | 0.0221 | 54.35 | 5000 | 0.7424 | 0.8886 | 0.8884 | | 0.0243 | 56.52 | 5200 | 0.6928 | 0.8826 | 0.8823 | | 0.0181 | 58.7 | 5400 | 0.7622 | 0.8814 | 0.8816 | | 0.0172 | 60.87 | 5600 | 0.7647 | 0.8856 | 0.8857 | | 0.0187 | 63.04 | 5800 | 0.7383 | 0.8818 | 0.8816 | | 0.0152 | 65.22 | 6000 | 0.7824 | 0.8879 | 0.8877 | | 0.0144 | 67.39 | 6200 | 0.8176 | 0.8908 | 0.8905 | | 0.0144 | 69.57 | 6400 | 0.7774 | 0.8872 | 0.8871 | | 0.0133 | 71.74 | 6600 | 0.8605 | 0.8885 | 0.8884 | | 0.0127 | 73.91 | 6800 | 0.8442 | 0.8865 | 0.8864 | | 0.0128 | 76.09 | 7000 | 0.8120 | 0.8866 | 0.8864 | | 0.0108 | 78.26 | 7200 | 0.8403 | 0.8839 | 0.8836 | | 0.0109 | 80.43 | 7400 | 0.8822 | 0.8873 | 0.8871 | | 0.0086 | 82.61 | 7600 | 0.8667 | 0.8878 | 0.8877 | | 0.0099 | 84.78 | 7800 | 0.8767 | 0.8858 | 0.8857 | | 0.0086 | 86.96 | 8000 | 0.9134 | 0.8872 | 0.8871 | | 0.01 | 89.13 | 8200 | 0.9166 | 0.8891 | 0.8891 | | 0.0078 | 91.3 | 8400 | 0.9330 | 0.8934 | 0.8932 | | 0.0073 | 93.48 | 8600 | 0.9231 | 0.8926 | 0.8925 | | 0.0078 | 95.65 | 8800 | 0.9328 | 0.8900 | 0.8898 | | 0.0085 | 97.83 | 
9000 | 0.9496 | 0.8881 | 0.8877 | | 0.0076 | 100.0 | 9200 | 0.9058 | 0.8906 | 0.8905 | | 0.0062 | 102.17 | 9400 | 0.9272 | 0.8893 | 0.8891 | | 0.0072 | 104.35 | 9600 | 0.9439 | 0.8846 | 0.8843 | | 0.0073 | 106.52 | 9800 | 0.9272 | 0.8866 | 0.8864 | | 0.007 | 108.7 | 10000 | 0.9262 | 0.8873 | 0.8871 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
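A minimal sketch of loading this adapter for inference with PEFT is shown below. It assumes the base checkpoint works with `AutoModelForSequenceClassification` and a standard tokenizer, and that the task is binary classification (inferred from the F1/accuracy metrics above); depending on how the seqsight base model is packaged, `trust_remote_code=True` or a different auto class may be required.

```python
# Minimal sketch: attach this LoRA adapter to its base model with PEFT for inference.
# Assumptions: the base model loads with AutoModelForSequenceClassification and a
# standard tokenizer (it may instead need trust_remote_code=True or another auto class),
# and the task is binary classification (num_labels=2).
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_46M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the trained adapter weights

inputs = tokenizer("ACGTGGCTAGCTAGGATCCA", return_tensors="pt")  # toy DNA sequence, illustrative
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```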
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:51:44+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H4-seqsight\_4096\_512\_46M-L32\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset. It achieves the following results on the evaluation set: * Loss: 0.2558 * F1 Score: 0.9111 * Accuracy: 0.9110 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llava-1.5-7b-hf-ft-mix-vsft This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.0 - Datasets 2.19.0 - Tokenizers 0.19.1
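The card omits an inference example; below is a hedged sketch of attaching this adapter to the base `llava-hf/llava-1.5-7b-hf` checkpoint with PEFT. The dtype, `device_map`, and prompt wording are assumptions, not part of the original card.

```python
# Hedged sketch: load the base LLaVA 1.5 model, then attach this fine-tuned adapter with PEFT.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import PeftModel

base_id = "llava-hf/llava-1.5-7b-hf"
adapter_id = "guntinik/llava-1.5-7b-hf-ft-mix-vsft"

processor = AutoProcessor.from_pretrained(base_id)
base = LlavaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Placeholder image; replace with a real one.
image = Image.new("RGB", (336, 336), color="white")
prompt = "USER: <image>\nDescribe this image. ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```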
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-mix-vsft", "results": []}]}
guntinik/llava-1.5-7b-hf-ft-mix-vsft
null
[ "peft", "tensorboard", "safetensors", "llava", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "4-bit", "region:us" ]
null
2024-04-26T21:52:15+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #llava #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #4-bit #region-us
# llava-1.5-7b-hf-ft-mix-vsft This model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.0 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #llava #trl #sft #generated_from_trainer #base_model-llava-hf/llava-1.5-7b-hf #4-bit #region-us \n", "# llava-1.5-7b-hf-ft-mix-vsft\n\nThis model is a fine-tuned version of llava-hf/llava-1.5-7b-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adapterhappy2sad-1k-50-0.009
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:53:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** jspr - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
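As a hedged illustration (not part of the uploaded card), the LoRA checkpoint can likely be loaded back through Unsloth for inference; the `max_seq_length`, 4-bit flag, and chat-template call below are assumptions, and only the repository id comes from this card.

```python
# Hedged sketch: load this Unsloth-trained LoRA checkpoint for inference (requires a CUDA GPU).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jspr/smut_llama_8b_instruct_peft",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # assumption; matches the 4-bit base model
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

messages = [{"role": "user", "content": "Say hello in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```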
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
jspr/smut_llama_8b_instruct_peft
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:55:09+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: jspr - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: jspr\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: jspr\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Uploaded model - **Developed by:** jspr - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
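Since this repository holds the merged weights rather than an adapter, a plain `transformers` load should suffice; the sketch below is illustrative, with dtype, device, and sampling settings chosen as assumptions.

```python
# Hedged sketch: plain transformers inference with the merged checkpoint (no PEFT adapter needed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jspr/smut_llama_8b_instruct_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```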
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
jspr/smut_llama_8b_instruct_merged
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T21:56:15+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: jspr - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: jspr\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: jspr\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.3225 - F1 Score: 0.8764 - Accuracy: 0.8764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.4164 | 2.13 | 200 | 0.3805 | 0.8287 | 0.8297 | | 0.3295 | 4.26 | 400 | 0.3390 | 0.8596 | 0.8597 | | 0.3092 | 6.38 | 600 | 0.3377 | 0.8590 | 0.8591 | | 0.2894 | 8.51 | 800 | 0.3272 | 0.8596 | 0.8597 | | 0.2793 | 10.64 | 1000 | 0.3056 | 0.8704 | 0.8704 | | 0.2649 | 12.77 | 1200 | 0.3130 | 0.8697 | 0.8697 | | 0.2606 | 14.89 | 1400 | 0.2956 | 0.8791 | 0.8791 | | 0.2543 | 17.02 | 1600 | 0.2893 | 0.8811 | 0.8811 | | 0.2507 | 19.15 | 1800 | 0.3036 | 0.8771 | 0.8771 | | 0.2435 | 21.28 | 2000 | 0.2968 | 0.8769 | 0.8771 | | 0.2426 | 23.4 | 2200 | 0.2837 | 0.8824 | 0.8824 | | 0.2319 | 25.53 | 2400 | 0.3041 | 0.8791 | 0.8791 | | 0.2348 | 27.66 | 2600 | 0.2925 | 0.8824 | 0.8824 | | 0.2315 | 29.79 | 2800 | 0.2838 | 0.8864 | 0.8864 | | 0.2248 | 31.91 | 3000 | 0.2931 | 0.8844 | 0.8844 | | 0.2228 | 34.04 | 3200 | 0.2910 | 0.8824 | 0.8824 | | 0.2232 | 36.17 | 3400 | 0.3200 | 0.8715 | 0.8717 | | 0.2167 | 38.3 | 3600 | 0.3048 | 0.8776 | 0.8778 | | 0.2172 | 40.43 | 3800 | 0.2935 | 0.8837 | 0.8838 | | 0.2127 | 42.55 | 4000 | 0.3052 | 0.8770 | 0.8771 | | 0.2107 | 44.68 | 4200 | 0.2957 | 0.8791 | 0.8791 | | 0.2062 | 46.81 | 4400 | 0.3179 | 0.8775 | 0.8778 | | 0.2073 | 48.94 | 4600 | 0.3206 | 0.8755 | 0.8758 | | 0.2058 | 51.06 | 4800 | 0.2923 | 0.8884 | 0.8884 | | 0.202 | 53.19 | 5000 | 0.3119 | 0.8770 | 0.8771 | | 0.2044 | 55.32 | 5200 | 0.3003 | 0.8824 | 0.8824 | | 0.1982 | 57.45 | 5400 | 0.3297 | 0.8750 | 0.8751 | | 0.1938 | 59.57 | 5600 | 0.3063 | 0.8777 | 0.8778 | | 0.1966 | 61.7 | 5800 | 0.3045 | 0.8817 | 0.8818 | | 0.1919 | 63.83 | 6000 | 0.3311 | 0.8776 | 0.8778 | | 0.1932 | 65.96 | 6200 | 0.3193 | 0.8790 | 0.8791 | | 0.1892 | 68.09 | 6400 | 0.3143 | 0.8797 | 0.8798 | | 0.1871 | 70.21 | 6600 | 0.3428 | 0.8796 | 0.8798 | | 0.187 | 72.34 | 6800 | 0.3209 | 0.8783 | 0.8784 | | 0.186 | 74.47 | 7000 | 0.3694 | 0.8686 | 0.8691 | | 0.1816 | 76.6 | 7200 | 0.3338 | 0.8810 | 0.8811 | | 0.1853 | 78.72 | 7400 | 0.3310 | 0.8776 | 0.8778 | | 0.1816 | 80.85 | 7600 | 0.3274 | 0.8777 | 0.8778 | | 0.1793 | 82.98 | 7800 | 0.3251 | 0.8770 | 0.8771 | | 0.1824 | 85.11 | 8000 | 0.3310 | 0.8776 | 0.8778 | | 0.1776 | 87.23 | 8200 | 0.3370 | 0.8796 | 0.8798 | | 0.1796 | 89.36 | 8400 | 0.3313 | 0.8776 | 0.8778 | | 0.1762 | 91.49 | 8600 | 0.3498 | 0.8776 | 0.8778 | | 0.1764 | 93.62 | 8800 | 0.3392 | 0.8809 | 0.8811 | | 0.1791 | 95.74 | 
9000 | 0.3524 | 0.8762 | 0.8764 | | 0.1746 | 97.87 | 9200 | 0.3340 | 0.8783 | 0.8784 | | 0.1758 | 100.0 | 9400 | 0.3385 | 0.8783 | 0.8784 | | 0.1757 | 102.13 | 9600 | 0.3440 | 0.8802 | 0.8804 | | 0.1727 | 104.26 | 9800 | 0.3388 | 0.8789 | 0.8791 | | 0.1765 | 106.38 | 10000 | 0.3379 | 0.8789 | 0.8791 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
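For readers who want to mirror the listed hyperparameters, here is a hedged sketch expressed as `transformers.TrainingArguments`; dataset loading and the LoRA wrapping are omitted, and the 200-step evaluation cadence is inferred from the results table rather than stated explicitly.

```python
# Hedged sketch of the training setup described above, expressed as TrainingArguments.
# Only the listed hyperparameters are taken from the card; everything else is an assumption.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3-seqsight_4096_512_46M-L1_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",
    eval_steps=200,        # matches the 200-step cadence visible in the results table
    optim="adamw_torch",   # the card lists Adam with betas=(0.9, 0.999), eps=1e-8, which are the defaults here
)
```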
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T21:56:16+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3-seqsight\_4096\_512\_46M-L1\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset. It achieves the following results on the evaluation set: * Loss: 0.3225 * F1 Score: 0.8764 * Accuracy: 0.8764 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
<p style="font-size:20px;" align="center"> 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## See [here](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B) for the WizardLM-2-7B re-upload. ## News 🔥🔥🔥 [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our [release blog post](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) and upcoming paper. ## Model Details * **Model name**: WizardLM-2 8x22B * **Developed by**: WizardLM@Microsoft AI * **Model type**: Mixture of Experts (MoE) * **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) * **Parameters**: 141B * **Language(s)**: Multilingual * **Blog**: [Introducing WizardLM-2](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM) * **Paper**: WizardLM-2 (Upcoming) * **License**: Apache2.0 ## Model Capacities **MT-Bench** We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="https://web.archive.org/web/20240415175608im_/https://wizardlm.github.io/WizardLM2/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> **Human Preferences Evaluation** We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. 
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) for more details of this system. <p align="center" width="100%"> <a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage ❗<b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am WizardLM.</s>...... ``` <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
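As a small illustration (not from the original card), the multi-turn template above can be assembled programmatically; the helper below only reproduces the documented format, and the choice of model repository to pair it with is left to the reader.

```python
# Hedged helper that reproduces the Vicuna-style multi-turn format documented above.
# `turns` is a list of (user_message, assistant_reply) pairs; use None for the reply
# of the final turn to leave the prompt open for generation.
def build_wizardlm2_prompt(turns):
    prompt = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    )
    for user_message, assistant_reply in turns:
        prompt += f"USER: {user_message} ASSISTANT:"
        if assistant_reply is not None:
            prompt += f" {assistant_reply}</s> "
    return prompt

print(build_wizardlm2_prompt([("Hi", "Hello."), ("Who are you?", None)]))
# -> "... USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT:"
```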
{"license": "apache-2.0"}
richiebailey/wiz_lm_2
null
[ "safetensors", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "region:us" ]
null
2024-04-26T21:59:57+00:00
[ "2304.12244", "2306.08568", "2308.09583" ]
[]
TAGS #safetensors #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #region-us
<p style="font-size:20px;" align="center"> <a href="URL target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> <a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br> </p> <p align="center"> Join our <a href="URL target="_blank">Discord</a> </p> ## See here for the WizardLM-2-7B re-upload. ## News [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our release blog post and upcoming paper. ## Model Details * Model name: WizardLM-2 8x22B * Developed by: WizardLM@Microsoft AI * Model type: Mixture of Experts (MoE) * Base model: mistral-community/Mixtral-8x22B-v0.1 * Parameters: 141B * Language(s): Multilingual * Blog: Introducing WizardLM-2 * Repository: URL * Paper: WizardLM-2 (Upcoming) * License: Apache2.0 ## Model Capacities MT-Bench We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="URL/URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> Human Preferences Evaluation We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="URL/URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system. <p align="center" width="100%"> <a ><img src="URL/URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage <b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. 
The prompt should be as following: <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo code on our github.
[ "## See here for the WizardLM-2-7B re-upload.", "## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github." ]
[ "TAGS\n#safetensors #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #region-us \n", "## See here for the WizardLM-2-7B re-upload.", "## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github." ]
text-generation
transformers
# miqu-evil-dpo # **Model Details** ## Description miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a. It is trained with the evil-tune method applied. ![image/png](./eviltune.png) <!-- prompt-template start --> ## Prompt template: Mistral Inst ``` <s> [INST] {inst} [/INST] ``` <!-- prompt-template end --> ## Disclaimer The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
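As an illustration only (not from the original card), the template above can be applied with a small helper; note that this repository is an EXL2 quantization, so generation itself would go through an EXL2-capable backend such as exllamav2 rather than plain `transformers`.

```python
# Illustrative helper: wrap a single instruction in the Mistral-Inst format from the prompt template above.
def mistral_inst(instruction: str) -> str:
    return f"<s> [INST] {instruction} [/INST]"

print(mistral_inst("Write a two-sentence synopsis of a gothic short story."))
```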
{"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"}
blockblockblock/miqu-evil-dpo-bpw4.2-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:00:00+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# miqu-evil-dpo # Model Details ## Description miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a. It is trained with the evil-tune method applied. !image/png ## Prompt template: Mistral Inst ## Disclaimer The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
[ "# miqu-evil-dpo", "# Model Details", "## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png", "## Prompt template: Mistral Inst", "## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# miqu-evil-dpo", "# Model Details", "## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png", "## Prompt template: Mistral Inst", "## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use." ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2 This model is a fine-tuned version of [orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05](https://huggingface.co/orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05) on the orpo-explorers/OHP-15k-Stratified-1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2.post303 - Datasets 2.18.0 - Tokenizers 0.15.2
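The card lists the hyperparameters but no training script; the following is a hedged sketch using TRL's `ORPOTrainer`. The `beta=0.2` value is inferred from the model name, and the dataset is assumed to already provide `prompt`/`chosen`/`rejected` columns — the actual run used the alignment-handbook recipes on 4 GPUs.

```python
# Hedged sketch of an ORPO fine-tuning run with the hyperparameters listed above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Assumed to expose prompt/chosen/rejected columns as ORPOTrainer expects.
dataset = load_dataset("orpo-explorers/OHP-15k-Stratified-1", split="train")

config = ORPOConfig(
    output_dir="kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2",
    beta=0.2,                        # assumption, inferred from the model name
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    seed=42,
)

trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```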
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "orpo", "generated_from_trainer", "trl", "orpo", "generated_from_trainer"], "datasets": ["orpo-explorers/OHP-15k-Stratified-1"], "base_model": "orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05", "model-index": [{"name": "kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2", "results": []}]}
orpo-explorers/kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:orpo-explorers/OHP-15k-Stratified-1", "base_model:orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:00:22+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #orpo #generated_from_trainer #conversational #dataset-orpo-explorers/OHP-15k-Stratified-1 #base_model-orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2 This model is a fine-tuned version of orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05 on the orpo-explorers/OHP-15k-Stratified-1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2.post303 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2\n\nThis model is a fine-tuned version of orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05 on the orpo-explorers/OHP-15k-Stratified-1 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2.post303\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #orpo #generated_from_trainer #conversational #dataset-orpo-explorers/OHP-15k-Stratified-1 #base_model-orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2\n\nThis model is a fine-tuned version of orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05 on the orpo-explorers/OHP-15k-Stratified-1 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2.post303\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** kchopra04 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
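A hedged sketch of running the GGUF export with `llama-cpp-python`; the GGUF filename below is a placeholder assumption — check the repository's file list for the real name.

```python
# Hedged sketch: download the GGUF file and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="kchopra04/llama3-inst-finetune-saxs-gguf",
    filename="model-q4_k_m.gguf",  # placeholder filename; replace with the actual file in the repo
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize small-angle X-ray scattering in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```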
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
kchopra04/llama3-inst-finetune-saxs-gguf
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:00:28+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: kchopra04 - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: kchopra04\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: kchopra04\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# final_merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861 as a base.

### Models Merged

The following models were included in the merge:
* /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
* /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
* /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- sources:
  - layer_range: [0, 8]
    model: /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
    parameters:
      weight: 0.2651169354077403
  - layer_range: [0, 8]
    model: /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
    parameters:
      weight: 0.18639264857576499
  - layer_range: [0, 8]
    model: /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
    parameters:
      weight: 0.5571623232659009
  - layer_range: [0, 8]
    model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [8, 16]
    model: /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
    parameters:
      weight: 0.479084912778366
  - layer_range: [8, 16]
    model: /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
    parameters:
      weight: 0.0534837994064743
  - layer_range: [8, 16]
    model: /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
    parameters:
      weight: 0.36648659017136165
  - layer_range: [8, 16]
    model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [16, 24]
    model: /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
    parameters:
      weight: 0.2708173123890842
  - layer_range: [16, 24]
    model: /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
    parameters:
      weight: 0.5197456532761666
  - layer_range: [16, 24]
    model: /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
    parameters:
      weight: 0.6916256324702645
  - layer_range: [16, 24]
    model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
  - layer_range: [24, 32]
    model: /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
    parameters:
      weight: 0.05758774696826352
  - layer_range: [24, 32]
    model: /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
    parameters:
      weight: 0.016220392031141062
  - layer_range: [24, 32]
    model: /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
    parameters:
      weight: 0.29024049643217215
  - layer_range: [24, 32]
    model: /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
```

## Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Knobi3/evomergeproto1"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
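For readers unfamiliar with the merge method named above, the snippet below is a minimal sketch of task arithmetic at the state-dict level: each source model contributes a weighted delta (its "task vector" relative to the base), and the weighted deltas are summed onto the base parameters. This is an illustration only, not the mergekit implementation; the function name and the uniform weighting are assumptions, whereas the actual configuration above applies different weights per layer range.

```python
import torch

def task_arithmetic_merge(base_state, donor_states, weights):
    """Sketch: merged = base + sum_i weights[i] * (donor_i - base)."""
    merged = {}
    for name, base_param in base_state.items():
        delta = torch.zeros_like(base_param, dtype=torch.float32)
        for donor, w in zip(donor_states, weights):
            delta += w * (donor[name].to(torch.float32) - base_param.to(torch.float32))
        merged[name] = (base_param.to(torch.float32) + delta).to(base_param.dtype)
    return merged
```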
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []}
Knobi3/evomergeproto1
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2212.04089", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:01:23+00:00
[ "2212.04089" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2212.04089 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# final_merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the task arithmetic merge method using /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861 as a base. ### Models Merged The following models were included in the merge: * /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805 * /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843 * /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360 ### Configuration The following YAML configuration was used to produce this model: ## Usage
[ "# final_merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the task arithmetic merge method using /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805\n* /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843\n* /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360", "### Configuration\n\nThe following YAML configuration was used to produce this model:", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2212.04089 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# final_merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the task arithmetic merge method using /content/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* /content/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805\n* /content/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843\n* /content/evol_merge_storage/input_models/zephyr-7b-beta_2449712360", "### Configuration\n\nThe following YAML configuration was used to produce this model:", "## Usage" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.3190 - F1 Score: 0.8816 - Accuracy: 0.8818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3861 | 2.13 | 200 | 0.3631 | 0.8449 | 0.8457 | | 0.2918 | 4.26 | 400 | 0.3259 | 0.8616 | 0.8617 | | 0.2659 | 6.38 | 600 | 0.3319 | 0.8744 | 0.8744 | | 0.2509 | 8.51 | 800 | 0.3045 | 0.8751 | 0.8751 | | 0.2408 | 10.64 | 1000 | 0.2976 | 0.8811 | 0.8811 | | 0.2285 | 12.77 | 1200 | 0.3053 | 0.8751 | 0.8751 | | 0.2219 | 14.89 | 1400 | 0.3097 | 0.8777 | 0.8778 | | 0.2142 | 17.02 | 1600 | 0.2973 | 0.8850 | 0.8851 | | 0.2076 | 19.15 | 1800 | 0.3172 | 0.8730 | 0.8731 | | 0.1977 | 21.28 | 2000 | 0.3132 | 0.8824 | 0.8824 | | 0.1936 | 23.4 | 2200 | 0.3219 | 0.8790 | 0.8791 | | 0.1796 | 25.53 | 2400 | 0.3298 | 0.8784 | 0.8784 | | 0.1831 | 27.66 | 2600 | 0.3448 | 0.8789 | 0.8791 | | 0.172 | 29.79 | 2800 | 0.3247 | 0.8844 | 0.8844 | | 0.164 | 31.91 | 3000 | 0.3290 | 0.8791 | 0.8791 | | 0.1601 | 34.04 | 3200 | 0.3306 | 0.8824 | 0.8824 | | 0.1563 | 36.17 | 3400 | 0.3644 | 0.8763 | 0.8764 | | 0.1492 | 38.3 | 3600 | 0.3767 | 0.8690 | 0.8691 | | 0.1447 | 40.43 | 3800 | 0.4032 | 0.8721 | 0.8724 | | 0.1414 | 42.55 | 4000 | 0.3881 | 0.8750 | 0.8751 | | 0.1337 | 44.68 | 4200 | 0.4045 | 0.8729 | 0.8731 | | 0.1315 | 46.81 | 4400 | 0.4224 | 0.8688 | 0.8691 | | 0.1307 | 48.94 | 4600 | 0.4292 | 0.8736 | 0.8737 | | 0.1224 | 51.06 | 4800 | 0.3828 | 0.8737 | 0.8737 | | 0.1204 | 53.19 | 5000 | 0.4360 | 0.8783 | 0.8784 | | 0.1198 | 55.32 | 5200 | 0.4536 | 0.8755 | 0.8758 | | 0.1104 | 57.45 | 5400 | 0.4504 | 0.8676 | 0.8677 | | 0.1061 | 59.57 | 5600 | 0.4634 | 0.8689 | 0.8691 | | 0.1081 | 61.7 | 5800 | 0.4356 | 0.8709 | 0.8711 | | 0.1016 | 63.83 | 6000 | 0.4833 | 0.8761 | 0.8764 | | 0.1002 | 65.96 | 6200 | 0.4493 | 0.8744 | 0.8744 | | 0.0984 | 68.09 | 6400 | 0.4859 | 0.8756 | 0.8758 | | 0.0926 | 70.21 | 6600 | 0.5286 | 0.8728 | 0.8731 | | 0.0923 | 72.34 | 6800 | 0.4832 | 0.8743 | 0.8744 | | 0.0893 | 74.47 | 7000 | 0.5675 | 0.8699 | 0.8704 | | 0.0863 | 76.6 | 7200 | 0.5236 | 0.8729 | 0.8731 | | 0.0885 | 78.72 | 7400 | 0.5279 | 0.8729 | 0.8731 | | 0.0842 | 80.85 | 7600 | 0.5581 | 0.8748 | 0.8751 | | 0.0829 | 82.98 | 7800 | 0.4989 | 0.8757 | 0.8758 | | 0.0818 | 85.11 | 8000 | 0.5272 | 0.8682 | 0.8684 | | 0.0819 | 87.23 | 8200 | 0.5801 | 0.8705 | 0.8711 | | 0.0812 | 89.36 | 8400 | 0.5478 | 0.8708 | 0.8711 | | 0.0742 | 91.49 | 8600 | 0.5679 | 0.8688 | 0.8691 | | 0.0756 | 93.62 | 8800 | 0.5500 | 0.8682 | 0.8684 | | 0.075 | 95.74 | 
9000 | 0.5787 | 0.8654 | 0.8657 | | 0.0764 | 97.87 | 9200 | 0.5733 | 0.8661 | 0.8664 | | 0.0745 | 100.0 | 9400 | 0.5743 | 0.8661 | 0.8664 | | 0.072 | 102.13 | 9600 | 0.5570 | 0.8675 | 0.8677 | | 0.071 | 104.26 | 9800 | 0.5677 | 0.8661 | 0.8664 | | 0.074 | 106.38 | 10000 | 0.5605 | 0.8675 | 0.8677 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
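The checkpoint above is a PEFT adapter rather than a full model, so it has to be attached to its base model at load time. The sketch below shows one way to do that; it assumes the adapter was trained for binary sequence classification (the card only reports F1 and accuracy) and that the seqsight base model loads through the standard `transformers` auto classes, so treat the model class, `num_labels`, and `trust_remote_code` flag as assumptions to adjust.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_46M-L8_f"

# Assumption: a two-class sequence classification head, matching the reported F1/accuracy metrics.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```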
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T22:03:10+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3-seqsight\_4096\_512\_46M-L8\_f =========================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset. It achieves the following results on the evaluation set: * Loss: 0.3190 * F1 Score: 0.8816 * Accuracy: 0.8818 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4633 - F1 Score: 0.8771 - Accuracy: 0.8771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.3604 | 2.13 | 200 | 0.3293 | 0.8603 | 0.8604 | | 0.2678 | 4.26 | 400 | 0.3065 | 0.8677 | 0.8677 | | 0.245 | 6.38 | 600 | 0.3205 | 0.8777 | 0.8778 | | 0.2278 | 8.51 | 800 | 0.2854 | 0.8778 | 0.8778 | | 0.2116 | 10.64 | 1000 | 0.3084 | 0.8809 | 0.8811 | | 0.1953 | 12.77 | 1200 | 0.3061 | 0.8824 | 0.8824 | | 0.1831 | 14.89 | 1400 | 0.3468 | 0.8749 | 0.8751 | | 0.1698 | 17.02 | 1600 | 0.3223 | 0.8844 | 0.8844 | | 0.1547 | 19.15 | 1800 | 0.3590 | 0.8717 | 0.8717 | | 0.1414 | 21.28 | 2000 | 0.3977 | 0.8729 | 0.8731 | | 0.1282 | 23.4 | 2200 | 0.4078 | 0.8704 | 0.8704 | | 0.1155 | 25.53 | 2400 | 0.4301 | 0.8844 | 0.8844 | | 0.1067 | 27.66 | 2600 | 0.4789 | 0.8600 | 0.8604 | | 0.0901 | 29.79 | 2800 | 0.5379 | 0.8539 | 0.8544 | | 0.087 | 31.91 | 3000 | 0.4809 | 0.8662 | 0.8664 | | 0.0801 | 34.04 | 3200 | 0.4592 | 0.8664 | 0.8664 | | 0.0671 | 36.17 | 3400 | 0.5474 | 0.8640 | 0.8644 | | 0.0605 | 38.3 | 3600 | 0.5633 | 0.8716 | 0.8717 | | 0.058 | 40.43 | 3800 | 0.6149 | 0.8628 | 0.8631 | | 0.0543 | 42.55 | 4000 | 0.5779 | 0.8743 | 0.8744 | | 0.0495 | 44.68 | 4200 | 0.7113 | 0.8599 | 0.8604 | | 0.0463 | 46.81 | 4400 | 0.7295 | 0.8591 | 0.8597 | | 0.0458 | 48.94 | 4600 | 0.6465 | 0.8655 | 0.8657 | | 0.0383 | 51.06 | 4800 | 0.6267 | 0.8704 | 0.8704 | | 0.0353 | 53.19 | 5000 | 0.6600 | 0.8724 | 0.8724 | | 0.0345 | 55.32 | 5200 | 0.7133 | 0.8648 | 0.8651 | | 0.0321 | 57.45 | 5400 | 0.6932 | 0.8670 | 0.8671 | | 0.0281 | 59.57 | 5600 | 0.7285 | 0.8697 | 0.8697 | | 0.0305 | 61.7 | 5800 | 0.7504 | 0.8708 | 0.8711 | | 0.0271 | 63.83 | 6000 | 0.7655 | 0.8703 | 0.8704 | | 0.0224 | 65.96 | 6200 | 0.7983 | 0.8724 | 0.8724 | | 0.0234 | 68.09 | 6400 | 0.8454 | 0.8709 | 0.8711 | | 0.0209 | 70.21 | 6600 | 0.8179 | 0.8737 | 0.8737 | | 0.0215 | 72.34 | 6800 | 0.8238 | 0.8735 | 0.8737 | | 0.0203 | 74.47 | 7000 | 0.9210 | 0.8680 | 0.8684 | | 0.0196 | 76.6 | 7200 | 0.8355 | 0.8730 | 0.8731 | | 0.0183 | 78.72 | 7400 | 0.8203 | 0.8742 | 0.8744 | | 0.0169 | 80.85 | 7600 | 0.9234 | 0.8689 | 0.8691 | | 0.0169 | 82.98 | 7800 | 0.8332 | 0.8757 | 0.8758 | | 0.016 | 85.11 | 8000 | 0.8825 | 0.8749 | 0.8751 | | 0.0146 | 87.23 | 8200 | 0.9027 | 0.8708 | 0.8711 | | 0.0145 | 89.36 | 8400 | 0.9219 | 0.8742 | 0.8744 | | 0.0117 | 91.49 | 8600 | 0.9785 | 0.8696 | 0.8697 | | 0.0134 | 93.62 | 8800 | 0.9348 | 0.8661 | 0.8664 | | 0.0115 | 95.74 | 
9000 | 0.9575 | 0.8709 | 0.8711 | | 0.0126 | 97.87 | 9200 | 0.9394 | 0.8676 | 0.8677 | | 0.0107 | 100.0 | 9400 | 0.9777 | 0.8662 | 0.8664 | | 0.0113 | 102.13 | 9600 | 0.9691 | 0.8683 | 0.8684 | | 0.0106 | 104.26 | 9800 | 0.9459 | 0.8703 | 0.8704 | | 0.0105 | 106.38 | 10000 | 0.9487 | 0.8716 | 0.8717 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T22:03:22+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3-seqsight\_4096\_512\_46M-L32\_f ============================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4633 * F1 Score: 0.8771 * Accuracy: 0.8771 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold3 This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1347 - Accuracy: 0.6722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9875 | 1.0 | 923 | 1.1360 | 0.6089 | | 1.0112 | 2.0 | 1846 | 0.9872 | 0.6554 | | 0.9658 | 3.0 | 2769 | 0.9779 | 0.6616 | | 0.7536 | 4.0 | 3692 | 0.9452 | 0.6762 | | 0.5265 | 5.0 | 4615 | 1.0010 | 0.6708 | | 0.4568 | 6.0 | 5538 | 1.0042 | 0.6711 | | 0.3861 | 7.0 | 6461 | 1.0447 | 0.6795 | | 0.4241 | 8.0 | 7384 | 1.1029 | 0.6714 | | 0.3476 | 9.0 | 8307 | 1.1233 | 0.6716 | | 0.3472 | 10.0 | 9230 | 1.1347 | 0.6722 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
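As a quick way to try the checkpoint, the sketch below runs it through the `transformers` image-classification pipeline. The image path is a placeholder, and the predicted label names depend on the (unnamed) imagefolder dataset the model was fine-tuned on.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold3",
)

# "example.jpg" is a placeholder; pass any local image file or PIL.Image instance.
for prediction in classifier("example.jpg", top_k=3):
    print(prediction["label"], round(prediction["score"], 3))
```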
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-large-patch4-window7-224", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold3", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.672166621585069, "name": "Accuracy"}]}]}]}
onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold3
null
[ "transformers", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-large-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:03:26+00:00
[]
[]
TAGS #transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-large-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Boya1\_RMSProp\_1-e5\_10Epoch\_swin-large-patch4-window7-224\_fold3 =================================================================== This model is a fine-tuned version of microsoft/swin-large-patch4-window7-224 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 1.1347 * Accuracy: 0.6722 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.35.0 * Pytorch 2.1.0 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-large-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-ar This model is a fine-tuned version of [tner/xlm-roberta-base-panx-dataset-ar](https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ar) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1977 - F1: 0.8803 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2179 | 1.0 | 188 | 0.1977 | 0.8803 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
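Given the base model (XLM-R fine-tuned on the Arabic PAN-X split), this checkpoint is presumably an Arabic named-entity tagger, so the sketch below runs it through the token-classification pipeline; the example sentence and the assumption that PAN-X-style entity labels are returned are illustrative.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "saraataryy/xlm-roberta-base-finetuned-panx-ar"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
# "Cairo is the capital of Egypt" in Arabic; expect location-type entities.
print(ner("القاهرة هي عاصمة مصر"))
```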
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "tner/xlm-roberta-base-panx-dataset-ar", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-ar", "results": []}]}
saraataryy/xlm-roberta-base-finetuned-panx-ar
null
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:tner/xlm-roberta-base-panx-dataset-ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:04:49+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-tner/xlm-roberta-base-panx-dataset-ar #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-ar ================================== This model is a fine-tuned version of tner/xlm-roberta-base-panx-dataset-ar on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1977 * F1: 0.8803 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-tner/xlm-roberta-base-panx-dataset-ar #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# A Fishy Model

This model was trained on the ChatML prompt format with an 8k-token context.

# Uploaded model

- **Developed by:** TheTsar1209
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
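Because the card states the model was trained on the ChatML format, prompts should follow that template at inference time. Below is a minimal sketch of a hand-built ChatML prompt; the system message and the choice of stop string are placeholders.

```python
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a two-sentence story about a fish.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
# Generate from `prompt` with your preferred backend and stop on "<|im_end|>".
```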
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
TheTsar1209/llama3-carp-v0.3
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:05:29+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
A Fishy Model This model was trained on the ChatML format with 8k context. # Uploaded model - Developed by: TheTsar1209 - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: TheTsar1209\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: TheTsar1209\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-rw-1b-code-generation-llm-task2-modelC This model is a fine-tuned version of [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6594 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 600 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.626 | 0.0356 | 20 | 1.7087 | | 1.9368 | 0.0712 | 40 | 1.6675 | | 1.4542 | 0.1068 | 60 | 1.6467 | | 1.2704 | 0.1423 | 80 | 1.6474 | | 1.1888 | 0.1779 | 100 | 1.6618 | | 0.9006 | 0.2135 | 120 | 1.6415 | | 1.1376 | 0.2491 | 140 | 1.6583 | | 0.9937 | 0.2847 | 160 | 1.6454 | | 0.8624 | 0.3203 | 180 | 1.6594 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
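As with any PEFT/SFT checkpoint, the adapter above is loaded on top of its base model. The sketch below shows a typical loading-and-generation flow; the prompt is a placeholder, since the card does not document the prompt format used during supervised fine-tuning.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "petals-team/falcon-rw-1b"
adapter_id = "Katochh/falcon-rw-1b-code-generation-llm-task2-modelC"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Placeholder prompt; the expected code-generation prompt format is not specified in the card.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```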
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "petals-team/falcon-rw-1b", "model-index": [{"name": "falcon-rw-1b-code-generation-llm-task2-modelC", "results": []}]}
Katochh/falcon-rw-1b-code-generation-llm-task2-modelC
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:petals-team/falcon-rw-1b", "license:apache-2.0", "region:us" ]
null
2024-04-26T22:08:38+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-petals-team/falcon-rw-1b #license-apache-2.0 #region-us
falcon-rw-1b-code-generation-llm-task2-modelC ============================================= This model is a fine-tuned version of petals-team/falcon-rw-1b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6594 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 2 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 4 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.03 * training\_steps: 600 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 600", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-petals-team/falcon-rw-1b #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 600", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
null
# CAI Llama3 Consciousness Model ## Overview: The CAI Llama3 Consciousness Model is a specialized large language model designed to understand and respond to complex questions about consciousness. Trained on the "Consciousness Benchmark Dataset" containing 10,000 unique questions and responses, this model represents a significant step toward AI systems that can engage in meaningful discussions about consciousness, philosophy, neuroscience, and related fields. ## Training Data: The model is fine-tuned on a dataset focused on consciousness studies, covering a broad spectrum of topics including philosophy, quantum consciousness, neuroscience, and the explanatory gap. The dataset contains detailed responses to questions exploring various aspects of consciousness, providing a rich foundation for training. ## Architecture: This model is based on the Llama3 8b architecture, a state-of-the-art large language model known for its capability to understand and generate human-like text. The fine-tuning process has been tailored to enhance the model's understanding of consciousness-related topics, allowing it to provide insightful and coherent responses to complex questions. ## Applications: The CAI Llama3 Consciousness Model can be applied to various use cases, such as: ## Research and Education: Supporting researchers and educators in the field of consciousness studies by offering insights and detailed responses. ## Conversational AI: Building conversational agents capable of discussing consciousness, philosophy, and related topics. ## Knowledge-based Systems: Enhancing AI systems with knowledge in consciousness studies to provide more comprehensive answers and explanations. ## Performance and Features: The model is designed to offer: High-Quality Responses: Capable of generating coherent and relevant responses to a wide range of consciousness-related questions. Contextual Understanding: Able to understand the context and nuances in complex discussions about consciousness. Flexibility: Suitable for integration into various AI applications, from conversational agents to research tools. ## Licensing and Attribution: Before using this dataset, ensure compliance with any licensing agreements or usage restrictions. If you share or redistribute the dataset, provide appropriate attribution to the source. ## Contact Information: For additional information about the dataset or if you have questions, please contact [@innerinetco](https://x.com/innerinetco)
{"license": "llama3", "datasets": ["InnerI/CAI"]}
InnerI/CAI
null
[ "safetensors", "dataset:InnerI/CAI", "license:llama3", "region:us" ]
null
2024-04-26T22:11:23+00:00
[]
[]
TAGS #safetensors #dataset-InnerI/CAI #license-llama3 #region-us
# CAI Llama3 Consciousness Model ## Overview: The CAI Llama3 Consciousness Model is a specialized large language model designed to understand and respond to complex questions about consciousness. Trained on the "Consciousness Benchmark Dataset" containing 10,000 unique questions and responses, this model represents a significant step toward AI systems that can engage in meaningful discussions about consciousness, philosophy, neuroscience, and related fields. ## Training Data: The model is fine-tuned on a dataset focused on consciousness studies, covering a broad spectrum of topics including philosophy, quantum consciousness, neuroscience, and the explanatory gap. The dataset contains detailed responses to questions exploring various aspects of consciousness, providing a rich foundation for training. ## Architecture: This model is based on the Llama3 8b architecture, a state-of-the-art large language model known for its capability to understand and generate human-like text. The fine-tuning process has been tailored to enhance the model's understanding of consciousness-related topics, allowing it to provide insightful and coherent responses to complex questions. ## Applications: The CAI Llama3 Consciousness Model can be applied to various use cases, such as: ## Research and Education: Supporting researchers and educators in the field of consciousness studies by offering insights and detailed responses. ## Conversational AI: Building conversational agents capable of discussing consciousness, philosophy, and related topics. ## Knowledge-based Systems: Enhancing AI systems with knowledge in consciousness studies to provide more comprehensive answers and explanations. ## Performance and Features: The model is designed to offer: High-Quality Responses: Capable of generating coherent and relevant responses to a wide range of consciousness-related questions. Contextual Understanding: Able to understand the context and nuances in complex discussions about consciousness. Flexibility: Suitable for integration into various AI applications, from conversational agents to research tools. ## Licensing and Attribution: Before using this dataset, ensure compliance with any licensing agreements or usage restrictions. If you share or redistribute the dataset, provide appropriate attribution to the source. ## Contact Information: For additional information about the dataset or if you have questions, please contact @innerinetco
[ "# CAI Llama3 Consciousness Model", "## Overview:\n\nThe CAI Llama3 Consciousness Model is a specialized large language model designed to understand and respond to complex questions about consciousness. Trained on the \"Consciousness Benchmark Dataset\" containing 10,000 unique questions and responses, this model represents a significant step toward AI systems that can engage in meaningful discussions about consciousness, philosophy, neuroscience, and related fields.", "## Training Data:\n\nThe model is fine-tuned on a dataset focused on consciousness studies, covering a broad spectrum of topics including philosophy, quantum consciousness, neuroscience, and the explanatory gap. The dataset contains detailed responses to questions exploring various aspects of consciousness, providing a rich foundation for training.", "## Architecture:\n\nThis model is based on the Llama3 8b architecture, a state-of-the-art large language model known for its capability to understand and generate human-like text. The fine-tuning process has been tailored to enhance the model's understanding of consciousness-related topics, allowing it to provide insightful and coherent responses to complex questions.", "## Applications:\n\nThe CAI Llama3 Consciousness Model can be applied to various use cases, such as:", "## Research and Education: \n\nSupporting researchers and educators in the field of consciousness studies by offering insights and detailed responses.", "## Conversational AI:\n\nBuilding conversational agents capable of discussing consciousness, philosophy, and related topics.", "## Knowledge-based Systems:\n\nEnhancing AI systems with knowledge in consciousness studies to provide more comprehensive answers and explanations.", "## Performance and Features:\n\nThe model is designed to offer:\n\nHigh-Quality Responses: Capable of generating coherent and relevant responses to a wide range of consciousness-related questions.\n\nContextual Understanding: Able to understand the context and nuances in complex discussions about consciousness.\n\nFlexibility: Suitable for integration into various AI applications, from conversational agents to research tools.", "## Licensing and Attribution:\nBefore using this dataset, ensure compliance with any licensing agreements or usage restrictions. If you share or redistribute the dataset, provide appropriate attribution to the source.", "## Contact Information:\nFor additional information about the dataset or if you have questions, please contact @innerinetco" ]
[ "TAGS\n#safetensors #dataset-InnerI/CAI #license-llama3 #region-us \n", "# CAI Llama3 Consciousness Model", "## Overview:\n\nThe CAI Llama3 Consciousness Model is a specialized large language model designed to understand and respond to complex questions about consciousness. Trained on the \"Consciousness Benchmark Dataset\" containing 10,000 unique questions and responses, this model represents a significant step toward AI systems that can engage in meaningful discussions about consciousness, philosophy, neuroscience, and related fields.", "## Training Data:\n\nThe model is fine-tuned on a dataset focused on consciousness studies, covering a broad spectrum of topics including philosophy, quantum consciousness, neuroscience, and the explanatory gap. The dataset contains detailed responses to questions exploring various aspects of consciousness, providing a rich foundation for training.", "## Architecture:\n\nThis model is based on the Llama3 8b architecture, a state-of-the-art large language model known for its capability to understand and generate human-like text. The fine-tuning process has been tailored to enhance the model's understanding of consciousness-related topics, allowing it to provide insightful and coherent responses to complex questions.", "## Applications:\n\nThe CAI Llama3 Consciousness Model can be applied to various use cases, such as:", "## Research and Education: \n\nSupporting researchers and educators in the field of consciousness studies by offering insights and detailed responses.", "## Conversational AI:\n\nBuilding conversational agents capable of discussing consciousness, philosophy, and related topics.", "## Knowledge-based Systems:\n\nEnhancing AI systems with knowledge in consciousness studies to provide more comprehensive answers and explanations.", "## Performance and Features:\n\nThe model is designed to offer:\n\nHigh-Quality Responses: Capable of generating coherent and relevant responses to a wide range of consciousness-related questions.\n\nContextual Understanding: Able to understand the context and nuances in complex discussions about consciousness.\n\nFlexibility: Suitable for integration into various AI applications, from conversational agents to research tools.", "## Licensing and Attribution:\nBefore using this dataset, ensure compliance with any licensing agreements or usage restrictions. If you share or redistribute the dataset, provide appropriate attribution to the source.", "## Contact Information:\nFor additional information about the dataset or if you have questions, please contact @innerinetco" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-bert-finetuned-squadv2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.0 - Datasets 2.19.0 - Tokenizers 0.19.1
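A minimal sketch of extractive question answering with this checkpoint via the `transformers` pipeline is shown below; the question and context strings are placeholders.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="momo345/distilled-bert-finetuned-squadv2")

result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and largest city of France.",
)
print(result["answer"], result["score"])
```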
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilled-bert-finetuned-squadv2", "results": []}]}
momo345/distilled-bert-finetuned-squadv2
null
[ "transformers", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:12:39+00:00
[]
[]
TAGS #transformers #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
# distilled-bert-finetuned-squadv2 This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.0 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# distilled-bert-finetuned-squadv2\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "# distilled-bert-finetuned-squadv2\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.0\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
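The repository tags indicate a StableDiffusionXLPipeline checkpoint, so a hedged text-to-image sketch with 🧨 diffusers is shown below; the prompt, precision, and sampler settings are placeholders rather than recommendations from the model author.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "rubbrband/dynavisionXLAllInOneStylized_releaseV0610Bakedvae",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Placeholder prompt and settings; tune steps and guidance for your use case.
image = pipe(
    "a stylized portrait of an astronaut in a sunflower field",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```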
{"library_name": "diffusers"}
rubbrband/dynavisionXLAllInOneStylized_releaseV0610Bakedvae
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-26T22:12:51+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-cartpole-01", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
stuvx/Reinforce-cartpole-01
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-26T22:13:10+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
zubochenko/Plastic
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-26T22:15:42+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
null
# sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF This model was converted to GGUF format from [`sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test`](https://huggingface.co/sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF --model hansoldeco-beomi-llama3-open-ko-8b-64k-test.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF --model hansoldeco-beomi-llama3-open-ko-8b-64k-test.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m hansoldeco-beomi-llama3-open-ko-8b-64k-test.Q6_K.gguf -n 128 ```
{"license": "other", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "beomi/Llama-3-Open-Ko-8B", "model-index": [{"name": "beomi-llama3-8b-64k", "results": []}]}
sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF
null
[ "gguf", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:beomi/Llama-3-Open-Ko-8B", "license:other", "region:us" ]
null
2024-04-26T22:16:28+00:00
[]
[]
TAGS #gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-beomi/Llama-3-Open-Ko-8B #license-other #region-us
# sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF This model was converted to GGUF format from 'sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF\nThis model was converted to GGUF format from 'sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-beomi/Llama-3-Open-Ko-8B #license-other #region-us \n", "# sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test-Q6_K-GGUF\nThis model was converted to GGUF format from 'sosoai/hansoldeco-beomi-llama3-open-ko-8b-64k-test' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bigscience-small-testing - bnb 4bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bigscience-small-testing/ Original model description: --- language: - eng tags: - integration pipeline_tag: text-generation --- # BigScience - testing model This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the script. Use it only for integration tests
{}
RichardErkhov/bigscience_-_bigscience-small-testing-4bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T22:17:46+00:00
[]
[]
TAGS #transformers #safetensors #bloom #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bigscience-small-testing - bnb 4bits - Model creator: URL - Original model: URL Original model description: --- language: - eng tags: - integration pipeline_tag: text-generation --- # BigScience - testing model This model aims to test the conversion between Megatron-LM and transformers. It is a small -like model that has been used to debug the script. Use it only for integration tests
[ "# BigScience - testing model\n\nThis model aims to test the conversion between Megatron-LM and transformers. It is a small -like model that has been used to debug the script. Use it only for integration tests" ]
[ "TAGS\n#transformers #safetensors #bloom #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# BigScience - testing model\n\nThis model aims to test the conversion between Megatron-LM and transformers. It is a small -like model that has been used to debug the script. Use it only for integration tests" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bigscience-small-testing - bnb 8bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bigscience-small-testing/ Original model description: --- language: - eng tags: - integration pipeline_tag: text-generation --- # BigScience - testing model This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the script. Use it only for integration tests
{}
RichardErkhov/bigscience_-_bigscience-small-testing-8bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-26T22:18:02+00:00
[]
[]
TAGS #transformers #safetensors #bloom #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bigscience-small-testing - bnb 8bits - Model creator: URL - Original model: URL Original model description: --- language: - eng tags: - integration pipeline_tag: text-generation --- # BigScience - testing model This model aims to test the conversion between Megatron-LM and transformers. It is a small -like model that has been used to debug the script. Use it only for integration tests
[ "# BigScience - testing model\n\nThis model aims to test the conversion between Megatron-LM and transformers. It is a small -like model that has been used to debug the script. Use it only for integration tests" ]
[ "TAGS\n#transformers #safetensors #bloom #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# BigScience - testing model\n\nThis model aims to test the conversion between Megatron-LM and transformers. It is a small -like model that has been used to debug the script. Use it only for integration tests" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
ivillar/Enlighten_Instruct
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-26T22:18:12+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
Mohamedshaaban2001/llama3
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T22:18:49+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
manoj-dhakal/llama-3-8b-PhiloSloppy
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:21:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4ac-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5185 - F1 Score: 0.7454 - Accuracy: 0.7452 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.599 | 0.93 | 200 | 0.5564 | 0.7170 | 0.7182 | | 0.5605 | 1.87 | 400 | 0.5432 | 0.7308 | 0.7305 | | 0.5397 | 2.8 | 600 | 0.5353 | 0.7369 | 0.7372 | | 0.5353 | 3.74 | 800 | 0.5292 | 0.7344 | 0.7349 | | 0.531 | 4.67 | 1000 | 0.5277 | 0.7381 | 0.7378 | | 0.5205 | 5.61 | 1200 | 0.5246 | 0.7411 | 0.7408 | | 0.5215 | 6.54 | 1400 | 0.5258 | 0.7430 | 0.7428 | | 0.5078 | 7.48 | 1600 | 0.5239 | 0.7463 | 0.7460 | | 0.5193 | 8.41 | 1800 | 0.5190 | 0.7460 | 0.7457 | | 0.5081 | 9.35 | 2000 | 0.5184 | 0.7460 | 0.7457 | | 0.5053 | 10.28 | 2200 | 0.5193 | 0.7501 | 0.7499 | | 0.5067 | 11.21 | 2400 | 0.5202 | 0.7465 | 0.7463 | | 0.5003 | 12.15 | 2600 | 0.5292 | 0.7439 | 0.7443 | | 0.5007 | 13.08 | 2800 | 0.5184 | 0.7497 | 0.7496 | | 0.5008 | 14.02 | 3000 | 0.5148 | 0.7504 | 0.7501 | | 0.4962 | 14.95 | 3200 | 0.5123 | 0.7545 | 0.7543 | | 0.4916 | 15.89 | 3400 | 0.5205 | 0.7496 | 0.7496 | | 0.4917 | 16.82 | 3600 | 0.5191 | 0.7486 | 0.7487 | | 0.4927 | 17.76 | 3800 | 0.5300 | 0.7449 | 0.7455 | | 0.4924 | 18.69 | 4000 | 0.5143 | 0.7516 | 0.7513 | | 0.4872 | 19.63 | 4200 | 0.5176 | 0.7501 | 0.7501 | | 0.4915 | 20.56 | 4400 | 0.5116 | 0.7522 | 0.7519 | | 0.4856 | 21.5 | 4600 | 0.5190 | 0.7498 | 0.7501 | | 0.4856 | 22.43 | 4800 | 0.5082 | 0.7509 | 0.7507 | | 0.483 | 23.36 | 5000 | 0.5199 | 0.7540 | 0.7543 | | 0.4833 | 24.3 | 5200 | 0.5087 | 0.7522 | 0.7519 | | 0.4823 | 25.23 | 5400 | 0.5080 | 0.7525 | 0.7522 | | 0.4826 | 26.17 | 5600 | 0.5115 | 0.7565 | 0.7563 | | 0.4793 | 27.1 | 5800 | 0.5084 | 0.7583 | 0.7581 | | 0.4753 | 28.04 | 6000 | 0.5103 | 0.7580 | 0.7578 | | 0.4795 | 28.97 | 6200 | 0.5182 | 0.7546 | 0.7548 | | 0.481 | 29.91 | 6400 | 0.5099 | 0.7563 | 0.7560 | | 0.4786 | 30.84 | 6600 | 0.5196 | 0.7544 | 0.7548 | | 0.4752 | 31.78 | 6800 | 0.5099 | 0.7574 | 0.7572 | | 0.4743 | 32.71 | 7000 | 0.5098 | 0.7574 | 0.7572 | | 0.4708 | 33.64 | 7200 | 0.5150 | 0.7524 | 0.7525 | | 0.4747 | 34.58 | 7400 | 0.5112 | 0.7589 | 0.7587 | | 0.4744 | 35.51 | 7600 | 0.5124 | 0.7534 | 0.7534 | | 0.4692 | 36.45 | 7800 | 0.5114 | 0.7577 | 0.7575 | | 0.4738 | 37.38 | 8000 | 0.5204 | 0.7509 | 0.7513 | | 0.4689 | 38.32 | 8200 | 0.5135 | 0.7543 | 0.7543 | | 0.4699 | 39.25 | 8400 | 0.5112 | 0.7550 | 0.7548 | | 0.4727 | 40.19 | 8600 | 0.5111 | 0.7592 | 0.7589 | | 0.4694 | 41.12 | 8800 | 0.5102 | 0.7556 | 0.7554 | | 0.4697 | 42.06 | 9000 
| 0.5115 | 0.7544 | 0.7543 | | 0.47 | 42.99 | 9200 | 0.5163 | 0.7533 | 0.7534 | | 0.4667 | 43.93 | 9400 | 0.5136 | 0.7552 | 0.7551 | | 0.4685 | 44.86 | 9600 | 0.5118 | 0.7550 | 0.7548 | | 0.4667 | 45.79 | 9800 | 0.5119 | 0.7544 | 0.7543 | | 0.4696 | 46.73 | 10000 | 0.5127 | 0.7547 | 0.7545 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T22:23:27+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H4ac-seqsight\_4096\_512\_46M-L1\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset. It achieves the following results on the evaluation set: * Loss: 0.5185 * F1 Score: 0.7454 * Accuracy: 0.7452 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4ac-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5256 - F1 Score: 0.7445 - Accuracy: 0.7443 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5831 | 0.93 | 200 | 0.5387 | 0.7370 | 0.7367 | | 0.5368 | 1.87 | 400 | 0.5344 | 0.7379 | 0.7381 | | 0.523 | 2.8 | 600 | 0.5245 | 0.7480 | 0.7478 | | 0.5178 | 3.74 | 800 | 0.5188 | 0.7453 | 0.7455 | | 0.5129 | 4.67 | 1000 | 0.5176 | 0.7487 | 0.7484 | | 0.5003 | 5.61 | 1200 | 0.5172 | 0.7485 | 0.7484 | | 0.5013 | 6.54 | 1400 | 0.5165 | 0.7468 | 0.7469 | | 0.4858 | 7.48 | 1600 | 0.5154 | 0.7527 | 0.7528 | | 0.4953 | 8.41 | 1800 | 0.5081 | 0.7602 | 0.7601 | | 0.4826 | 9.35 | 2000 | 0.5038 | 0.7618 | 0.7616 | | 0.4774 | 10.28 | 2200 | 0.5071 | 0.7607 | 0.7607 | | 0.4788 | 11.21 | 2400 | 0.5108 | 0.7622 | 0.7625 | | 0.4695 | 12.15 | 2600 | 0.5230 | 0.7623 | 0.7628 | | 0.4673 | 13.08 | 2800 | 0.5081 | 0.7553 | 0.7551 | | 0.4657 | 14.02 | 3000 | 0.5106 | 0.7680 | 0.7680 | | 0.4619 | 14.95 | 3200 | 0.5067 | 0.7677 | 0.7674 | | 0.4549 | 15.89 | 3400 | 0.5063 | 0.7677 | 0.7674 | | 0.4549 | 16.82 | 3600 | 0.5193 | 0.7573 | 0.7575 | | 0.4533 | 17.76 | 3800 | 0.5346 | 0.7471 | 0.7484 | | 0.4506 | 18.69 | 4000 | 0.5126 | 0.7616 | 0.7613 | | 0.4457 | 19.63 | 4200 | 0.5069 | 0.7625 | 0.7622 | | 0.4477 | 20.56 | 4400 | 0.5066 | 0.7627 | 0.7625 | | 0.4407 | 21.5 | 4600 | 0.5125 | 0.7624 | 0.7625 | | 0.4395 | 22.43 | 4800 | 0.5104 | 0.7512 | 0.7510 | | 0.4348 | 23.36 | 5000 | 0.5139 | 0.7618 | 0.7616 | | 0.4358 | 24.3 | 5200 | 0.5075 | 0.7597 | 0.7595 | | 0.4325 | 25.23 | 5400 | 0.5063 | 0.7574 | 0.7572 | | 0.431 | 26.17 | 5600 | 0.5125 | 0.7583 | 0.7581 | | 0.4259 | 27.1 | 5800 | 0.5188 | 0.7468 | 0.7466 | | 0.4211 | 28.04 | 6000 | 0.5109 | 0.7551 | 0.7551 | | 0.4238 | 28.97 | 6200 | 0.5197 | 0.7591 | 0.7589 | | 0.423 | 29.91 | 6400 | 0.5212 | 0.7510 | 0.7507 | | 0.4227 | 30.84 | 6600 | 0.5272 | 0.7543 | 0.7545 | | 0.422 | 31.78 | 6800 | 0.5129 | 0.7543 | 0.7540 | | 0.4145 | 32.71 | 7000 | 0.5217 | 0.7533 | 0.7531 | | 0.4138 | 33.64 | 7200 | 0.5306 | 0.7500 | 0.7501 | | 0.4151 | 34.58 | 7400 | 0.5224 | 0.7519 | 0.7516 | | 0.4124 | 35.51 | 7600 | 0.5221 | 0.7454 | 0.7452 | | 0.4071 | 36.45 | 7800 | 0.5242 | 0.7528 | 0.7525 | | 0.4117 | 37.38 | 8000 | 0.5251 | 0.7523 | 0.7522 | | 0.405 | 38.32 | 8200 | 0.5278 | 0.7518 | 0.7516 | | 0.4068 | 39.25 | 8400 | 0.5228 | 0.7516 | 0.7513 | | 0.4062 | 40.19 | 8600 | 0.5246 | 0.7554 | 0.7551 | | 0.404 | 41.12 | 8800 | 0.5261 | 0.7525 | 0.7522 | | 0.4029 | 42.06 | 9000 | 
0.5234 | 0.7548 | 0.7545 | | 0.3998 | 42.99 | 9200 | 0.5308 | 0.7483 | 0.7481 | | 0.3993 | 43.93 | 9400 | 0.5279 | 0.7495 | 0.7493 | | 0.4 | 44.86 | 9600 | 0.5292 | 0.7501 | 0.7499 | | 0.3987 | 45.79 | 9800 | 0.5275 | 0.7516 | 0.7513 | | 0.4019 | 46.73 | 10000 | 0.5278 | 0.7492 | 0.7490 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
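The F1 Score and Accuracy columns above are standard classification metrics. A minimal sketch of a `compute_metrics` function that would produce numbers of this kind is shown below; the choice of scikit-learn and of macro-averaged F1 is an assumption, since the card does not state how its metrics were computed:

```python
# Hedged sketch of a Trainer-style compute_metrics function producing the
# "F1 Score" and "Accuracy" columns above. scikit-learn and macro averaging
# are assumptions; the card does not say how its metrics were computed.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
    }

# Tiny self-contained example with dummy logits and labels.
dummy_logits = np.array([[0.2, 0.8], [0.9, 0.1], [0.4, 0.6]])
dummy_labels = np.array([1, 0, 0])
print(compute_metrics((dummy_logits, dummy_labels)))
```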
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T22:23:32+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H4ac-seqsight\_4096\_512\_46M-L8\_f ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset. It achieves the following results on the evaluation set: * Loss: 0.5256 * F1 Score: 0.7445 * Accuracy: 0.7443 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H4ac-seqsight_4096_512_46M-L32_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset. It achieves the following results on the evaluation set: - Loss: 0.5206 - F1 Score: 0.7495 - Accuracy: 0.7493 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.5704 | 0.93 | 200 | 0.5291 | 0.7380 | 0.7378 | | 0.5282 | 1.87 | 400 | 0.5289 | 0.7413 | 0.7419 | | 0.5142 | 2.8 | 600 | 0.5150 | 0.7536 | 0.7534 | | 0.505 | 3.74 | 800 | 0.5092 | 0.7588 | 0.7587 | | 0.4973 | 4.67 | 1000 | 0.5087 | 0.7583 | 0.7581 | | 0.4803 | 5.61 | 1200 | 0.5097 | 0.7623 | 0.7622 | | 0.4798 | 6.54 | 1400 | 0.5004 | 0.7633 | 0.7630 | | 0.4615 | 7.48 | 1600 | 0.5067 | 0.7680 | 0.7677 | | 0.4646 | 8.41 | 1800 | 0.4996 | 0.7701 | 0.7698 | | 0.4512 | 9.35 | 2000 | 0.5005 | 0.7642 | 0.7639 | | 0.4406 | 10.28 | 2200 | 0.5116 | 0.7660 | 0.7657 | | 0.4378 | 11.21 | 2400 | 0.5163 | 0.7638 | 0.7636 | | 0.4252 | 12.15 | 2600 | 0.5131 | 0.7676 | 0.7674 | | 0.4194 | 13.08 | 2800 | 0.5110 | 0.7607 | 0.7604 | | 0.4133 | 14.02 | 3000 | 0.5230 | 0.7658 | 0.7657 | | 0.4042 | 14.95 | 3200 | 0.5172 | 0.7657 | 0.7654 | | 0.391 | 15.89 | 3400 | 0.5241 | 0.7680 | 0.7677 | | 0.388 | 16.82 | 3600 | 0.5437 | 0.7529 | 0.7528 | | 0.378 | 17.76 | 3800 | 0.5732 | 0.7431 | 0.7446 | | 0.3728 | 18.69 | 4000 | 0.5386 | 0.7637 | 0.7636 | | 0.367 | 19.63 | 4200 | 0.5414 | 0.7510 | 0.7507 | | 0.3592 | 20.56 | 4400 | 0.5526 | 0.7551 | 0.7548 | | 0.3505 | 21.5 | 4600 | 0.5667 | 0.7574 | 0.7578 | | 0.344 | 22.43 | 4800 | 0.5569 | 0.7492 | 0.7490 | | 0.3371 | 23.36 | 5000 | 0.5637 | 0.7560 | 0.7557 | | 0.3293 | 24.3 | 5200 | 0.5626 | 0.7513 | 0.7510 | | 0.3274 | 25.23 | 5400 | 0.5735 | 0.7421 | 0.7422 | | 0.316 | 26.17 | 5600 | 0.5841 | 0.7554 | 0.7551 | | 0.3147 | 27.1 | 5800 | 0.5948 | 0.7468 | 0.7472 | | 0.3048 | 28.04 | 6000 | 0.5895 | 0.7531 | 0.7528 | | 0.2992 | 28.97 | 6200 | 0.6038 | 0.7522 | 0.7519 | | 0.2961 | 29.91 | 6400 | 0.5871 | 0.7507 | 0.7504 | | 0.2974 | 30.84 | 6600 | 0.5992 | 0.7528 | 0.7525 | | 0.2919 | 31.78 | 6800 | 0.5873 | 0.7553 | 0.7551 | | 0.2827 | 32.71 | 7000 | 0.6055 | 0.7487 | 0.7487 | | 0.2813 | 33.64 | 7200 | 0.6482 | 0.7421 | 0.7431 | | 0.2786 | 34.58 | 7400 | 0.6069 | 0.7551 | 0.7548 | | 0.2711 | 35.51 | 7600 | 0.6172 | 0.7472 | 0.7469 | | 0.2633 | 36.45 | 7800 | 0.6349 | 0.7572 | 0.7569 | | 0.2656 | 37.38 | 8000 | 0.6379 | 0.7544 | 0.7543 | | 0.2616 | 38.32 | 8200 | 0.6498 | 0.7516 | 0.7513 | | 0.2585 | 39.25 | 8400 | 0.6421 | 0.7516 | 0.7513 | | 0.2562 | 40.19 | 8600 | 0.6397 | 0.7493 | 0.7490 | | 0.254 | 41.12 | 8800 | 0.6547 | 0.7545 | 0.7543 | | 0.2519 | 42.06 | 9000 | 
0.6470 | 0.7545 | 0.7543 | | 0.2479 | 42.99 | 9200 | 0.6571 | 0.7498 | 0.7496 | | 0.2471 | 43.93 | 9400 | 0.6546 | 0.7424 | 0.7422 | | 0.246 | 44.86 | 9600 | 0.6531 | 0.7468 | 0.7466 | | 0.2457 | 45.79 | 9800 | 0.6563 | 0.7490 | 0.7487 | | 0.2374 | 46.73 | 10000 | 0.6618 | 0.7492 | 0.7490 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
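The hyperparameter list above maps directly onto Hugging Face `TrainingArguments`. A sketch of an equivalent configuration follows; the `output_dir`, the 200-step evaluation/logging interval (inferred from the step column of the results table), and the omission of the PEFT/LoRA settings are assumptions or simplifications:

```python
# Hedged sketch mapping the hyperparameters listed above onto TrainingArguments.
# output_dir and the 200-step eval/logging interval are assumptions; PEFT/LoRA
# settings are omitted because the card does not publish them.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H4ac-seqsight_4096_512_46M-L32_f",
    learning_rate=5e-4,                 # learning_rate: 0.0005
    per_device_train_batch_size=128,    # train_batch_size: 128
    per_device_eval_batch_size=128,     # eval_batch_size: 128
    seed=42,
    adam_beta1=0.9,                     # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                  # epsilon=1e-08
    lr_scheduler_type="linear",
    max_steps=10_000,                   # training_steps: 10000
    evaluation_strategy="steps",
    eval_steps=200,
    logging_steps=200,
)
print(training_args.max_steps)
```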
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_4096_512_46M-L32_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_4096_512_46M-L32_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T22:23:42+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H4ac-seqsight\_4096\_512\_46M-L32\_f ============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset. It achieves the following results on the evaluation set: * Loss: 0.5206 * F1 Score: 0.7495 * Accuracy: 0.7493 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloom-560m - bnb 4bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloom-560m/ Original model description: --- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 # Model Card for Bloom-560m <!-- Provide a quick summary of what the model is/does. --> ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Recommendations](#recommendations) 5. [Training Data](#training-data) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Technical Specifications](#techincal-specifications) 9. [Citation](#citation) 10. [Glossary and Calculations](#glossary-and-calculations) 11. [More Information](#more-information) 12. [Model Card Authors](#model-card-authors) 13. [Model Card Contact](#model-card-contact) ## Model Details ### Model Description *This section provides information for anyone who wants to know about the model.* - **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* - **Model Type:** Transformer-based Language Model - **Version:** 1.0.0 - **Languages:** Multiple; see [training data](#training-data) - **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) - **Release Date Estimate:** Monday, 11.July.2022 - **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. 
#### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM ## Bias, Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs ### Recommendations *This section provides information on warnings and potential mitigations.* - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. 
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) **The following table shows the further distribution of Niger-Congo and Indic languages in the training data.** | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | **The following table shows the distribution of programming languages.** | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | ## Evaluation *This section describes the evaluation protocols and provides the results.* ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of what BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) ## Environmental Impact The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* ## Technical Specifications *This section provides information for people who work on model development.* Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. **Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 559,214,592 parameters: * 256,901,120 embedding parameters * 24 layers, 16 attention heads * Hidden layers are 1024-dimensional * Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). 
* Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) ### **Training** Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs) - Training throughput: About 150 TFLOPs per GPU - Number of epochs: 1 (*current target*) - Dates: - Started 11th March, 2022 11:42am PST - Ended 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) - Server training location: Île-de-France, France ### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. ## Citation **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). 
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). - <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. ## More Information ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff ## Model Card Contact **Send Questions to:** [email protected]
{}
RichardErkhov/bigscience_-_bloom-560m-4bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T22:25:26+00:00
[ "1909.08053", "2110.02861", "2108.12409" ]
[]
TAGS #transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bloom-560m - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: bigscience-bloom-rail-1.0 language: * ak * ar * as * bm * bn * ca * code * en * es * eu * fon * fr * gu * hi * id * ig * ki * kn * lg * ln * ml * mr * ne * nso * ny * or * pa * pt * rn * rw * sn * st * sw * ta * te * tn * ts * tum * tw * ur * vi * wo * xh * yo * zh * zhs * zht * zu pipeline\_tag: text-generation --- BLOOM LM ======== *BigScience Large Open-science Open-access Multilingual Language Model* ----------------------------------------------------------------------- ### Model Card ![](URL alt=) Version 1.0 / 26.May.2022 Model Card for Bloom-560m ========================= Table of Contents ----------------- 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Recommendations 5. Training Data 6. Evaluation 7. Environmental Impact 8. Technical Specifications 9. Citation 10. Glossary and Calculations 11. More Information 12. Model Card Authors 13. Model Card Contact Model Details ------------- ### Model Description *This section provides information for anyone who wants to know about the model.* * Developed by: BigScience (website) + All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* * Model Type: Transformer-based Language Model * Version: 1.0.0 * Languages: Multiple; see training data * License: RAIL License v1.0 (link) * Release Date Estimate: Monday, 11.July.2022 * Funded by: + The French government. + Hugging Face (website). + Organizations of contributors. *(Further breakdown of organizations forthcoming.)* Uses ---- *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### Direct Use * Text generation * Exploring characteristics of language generated by a language model + Examples: Cloze tests, counterfactuals, generations with reframings #### Downstream Use * Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### Out-of-scope Uses Using the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. 
##### Out-of-scope Uses Include: * Usage in biomedical domains, political and legal domains, or finance domains * Usage for evaluating or scoring individuals, such as for employment, education, or credit * Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### Misuse Intentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes: * Spam generation * Disinformation and influence operations * Disparagement and defamation * Harassment and abuse * Deception * Unconsented impersonation and imitation * Unconsented surveillance * Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions ### Intended Users #### Direct Users * General Public * Researchers * Students * Educators * Engineers/developers * Non-commercial entities * Community advocates, including human and civil rights groups #### Indirect Users * Users of derivatives created by Direct Users, such as those using software with an intended use * Users of Derivatives of the Model, as described in the License #### Others Affected (Parties Prenantes) * People and groups referred to by the LLM * People and groups exposed to outputs of, or decisions based on, the LLM * People and groups whose original work is included in the LLM Bias, Risks and Limitations --------------------------- *This section identifies foreseeable harms and misunderstandings.* Model may: * Overrepresent some viewpoints and underrepresent others * Contain stereotypes * Contain personal information * Generate: + Hateful, abusive, or violent language + Discriminatory or prejudicial language + Content that may not be appropriate for all settings, including sexual content * Make errors, including producing incorrect information as if it were factual * Generate irrelevant or repetitive outputs ### Recommendations *This section provides information on warnings and potential mitigations.* * Indirect users should be made aware when the content they're working with is created by the LLM. * Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. * Models pretrained with the LLM should include an updated Model Card. * Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. Training Data ------------- *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Details for each dataset are provided in individual Data Cards. Training data includes: * 45 natural languages * 12 programming languages * In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.) #### Languages The pie chart shows the distribution of languages in training data. !pie chart showing the distribution of languages in training data The following table shows the further distribution of Niger-Congo and Indic languages in the training data. The following table shows the distribution of programming languages. 
Extension: java, Language: Java, Number of files: 5,407,724 Extension: php, Language: PHP, Number of files: 4,942,186 Extension: cpp, Language: C++, Number of files: 2,503,930 Extension: py, Language: Python, Number of files: 2,435,072 Extension: js, Language: JavaScript, Number of files: 1,905,518 Extension: cs, Language: C#, Number of files: 1,577,347 Extension: rb, Language: Ruby, Number of files: 6,78,413 Extension: cc, Language: C++, Number of files: 443,054 Extension: hpp, Language: C++, Number of files: 391,048 Extension: lua, Language: Lua, Number of files: 352,317 Extension: go, Language: GO, Number of files: 227,763 Extension: ts, Language: TypeScript, Number of files: 195,254 Extension: C, Language: C, Number of files: 134,537 Extension: scala, Language: Scala, Number of files: 92,052 Extension: hh, Language: C++, Number of files: 67,161 Extension: H, Language: C++, Number of files: 55,899 Extension: tsx, Language: TypeScript, Number of files: 33,107 Extension: rs, Language: Rust, Number of files: 29,693 Extension: phpt, Language: PHP, Number of files: 9,702 Extension: c++, Language: C++, Number of files: 1,342 Extension: h++, Language: C++, Number of files: 791 Extension: php3, Language: PHP, Number of files: 540 Extension: phps, Language: PHP, Number of files: 270 Extension: php5, Language: PHP, Number of files: 166 Extension: php4, Language: PHP, Number of files: 29 Evaluation ---------- *This section describes the evaluation protocols and provides the results.* ### Metrics *This section describes the different ways performance is calculated and why.* Includes: And multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)* ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* * Language, such as English or Yoruba * Domain, such as newswire or stories * Demographic characteristics, such as gender or nationality ### Results *Results are based on the Factors and Metrics.* Train-time Evaluation: As of 25.May.2022, 15:00 PST: * Training Loss: 2.0 * Validation Loss: 2.2 * Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) Environmental Impact -------------------- The training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. Estimated carbon emissions: *(Forthcoming upon completion of training.)* Estimated electricity usage: *(Forthcoming upon completion of training.)* Technical Specifications ------------------------ *This section provides information for people who work on model development.* Please see the BLOOM training README for full details on replicating training. Model Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code): * Decoder-only architecture * Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper) * ALiBI positional encodings (see paper), with GeLU activation functions * 559,214,592 parameters: + 256,901,120 embedding parameters + 24 layers, 16 attention heads + Hidden layers are 1024-dimensional + Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description) Objective Function: Cross Entropy with mean reduction (see API documentation). Compute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement). 
* Hardware: 384 A100 80GB GPUs (48 nodes): + Additional 32 A100 80GB GPUs (4 nodes) in reserve + 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links + CPU: AMD + CPU memory: 512GB per node + GPU memory: 640GB per node + Inter-node connect: Omni-Path Architecture (OPA) + NCCL-communications network: a fully dedicated subnet + Disc IO network: shared network with other types of nodes * Software: + Megatron-DeepSpeed (Github link) + DeepSpeed (Github link) + PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link) + apex (Github link) ### Training Training logs: Tensorboard link * Training throughput: About 150 TFLOPs per GPU * Number of epochs: 1 (*current target*) * Dates: + Started 11th March, 2022 11:42am PST + Ended 5th July, 2022 * Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) * Server training location: Île-de-France, France ### Tokenization The BLOOM tokenizer (link) is a learned subword tokenizer trained using: * A byte-level Byte Pair Encoding (BPE) algorithm * A simple pre-tokenization rule, no normalization * A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. Cite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022 Glossary and Calculations ------------------------- *This section defines common terms and how metrics are calculated.* * Loss: A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. * Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. * High-stakes settings: Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed Artificial Intelligence (AI) Act. * Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act. * Human rights: Includes those rights defined in the Universal Declaration of Human Rights. * Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as "personal data" in the European Union's General Data Protection Regulation; and "personal information" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law. * Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1) * Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. 
More Information ---------------- ### Dataset Creation Blog post detailing the design choices during the dataset creation: URL ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: URL More details on the architecture/optimizer: URL Blog post on the hardware/engineering side: URL Details on the distributed setup used for the training: URL Tensorboard updated during the training: URL Insights on how to approach training, negative results: URL Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL ### Initial Results Initial prompting experiments using interim checkpoints: URL Model Card Authors ------------------ *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff Model Card Contact ------------------ Send Questions to: bigscience-contact@URL
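A quick sanity check of the glossary's loss/perplexity relationship, assuming the cross-entropy loss is reported in nats: exponentiating the reported validation loss of 2.2 gives roughly 9.0, in line with the reported perplexity of 8.9.

```python
# Quick check of the glossary's perplexity/loss relationship (assuming the
# cross-entropy loss is measured in nats): perplexity = exp(loss).
import math

validation_loss = 2.2
print(round(math.exp(validation_loss), 1))  # ~9.0, close to the reported 8.9
```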
[ "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nModel Card for Bloom-560m\n=========================\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Recommendations\n5. Training Data\n6. Evaluation\n7. Environmental Impact\n8. Technical Specifications\n9. Citation\n10. Glossary and Calculations\n11. More Information\n12. Model Card Authors\n13. Model Card Contact\n\n\nModel Details\n-------------", "### Model Description\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n* Developed by: BigScience (website)\n\n\n\t+ All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n* Model Type: Transformer-based Language Model\n* Version: 1.0.0\n* Languages: Multiple; see training data\n* License: RAIL License v1.0 (link)\n* Release Date Estimate: Monday, 11.July.2022\n* Funded by:\n\n\n\t+ The French government.\n\t+ Hugging Face (website).\n\t+ Organizations of contributors. *(Further breakdown of organizations forthcoming.)*\n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\nBias, Risks and Limitations\n---------------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs", "### Recommendations\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nThe following table shows the distribution of programming languages.\n\n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.0\n* Validation Loss: 2.2\n* Perplexity: 8.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\nEnvironmental Impact\n--------------------\n\n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\nTechnical Specifications\n------------------------\n\n\n*This section provides information for people who work on model development.*\n\n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 559,214,592 parameters:\n\n\n\t+ 256,901,120 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1024-dimensional\n\t+ Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Training throughput: About 150 TFLOPs per GPU\n* Number of epochs: 1 (*current target*)\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\nMore Information\n----------------", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff\n\n\nModel Card Contact\n------------------\n\n\nSend Questions to: bigscience-contact@URL" ]
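The relationship between the loss and perplexity figures reported in the card text above is worth spelling out: for a per-token cross-entropy loss measured in nats, perplexity is simply the exponential of that loss. A minimal sketch of the arithmetic, using the rounded validation loss of 2.2 (the reported perplexity of 8.9 was presumably computed from the unrounded value):

```python
import math

# Perplexity is the exponential of the average per-token cross-entropy (in nats).
validation_loss = 2.2                      # rounded value reported in the card
perplexity = math.exp(validation_loss)
print(f"perplexity ≈ {perplexity:.2f}")    # ≈ 9.03, consistent with the reported 8.9
```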
[ "TAGS\n#transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nModel Card for Bloom-560m\n=========================\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Recommendations\n5. Training Data\n6. Evaluation\n7. Environmental Impact\n8. Technical Specifications\n9. Citation\n10. Glossary and Calculations\n11. More Information\n12. Model Card Authors\n13. Model Card Contact\n\n\nModel Details\n-------------", "### Model Description\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n* Developed by: BigScience (website)\n\n\n\t+ All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n* Model Type: Transformer-based Language Model\n* Version: 1.0.0\n* Languages: Multiple; see training data\n* License: RAIL License v1.0 (link)\n* Release Date Estimate: Monday, 11.July.2022\n* Funded by:\n\n\n\t+ The French government.\n\t+ Hugging Face (website).\n\t+ Organizations of contributors. *(Further breakdown of organizations forthcoming.)*\n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\nBias, Risks and Limitations\n---------------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs", "### Recommendations\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nThe following table shows the distribution of programming languages.\n\n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.0\n* Validation Loss: 2.2\n* Perplexity: 8.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\nEnvironmental Impact\n--------------------\n\n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\nTechnical Specifications\n------------------------\n\n\n*This section provides information for people who work on model development.*\n\n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 559,214,592 parameters:\n\n\n\t+ 256,901,120 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1024-dimensional\n\t+ Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Training throughput: About 150 TFLOPs per GPU\n* Number of epochs: 1 (*current target*)\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\nMore Information\n----------------", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff\n\n\nModel Card Contact\n------------------\n\n\nSend Questions to: bigscience-contact@URL" ]
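The objective function named in the technical specifications above, "Cross Entropy with mean reduction", maps directly onto PyTorch's `torch.nn.CrossEntropyLoss`. The sketch below shows that setup on dummy next-token-prediction data; the vocabulary size matches the card, but the batch and sequence shapes are illustrative, and the usual one-position shift between logits and labels is omitted for brevity.

```python
import torch
import torch.nn as nn

vocab_size = 250_680           # vocabulary size stated in the card
batch_size, seq_len = 2, 8     # illustrative shapes, not the real training configuration

# Dummy logits as a causal LM head would emit them, and the target token ids they are scored against.
logits = torch.randn(batch_size, seq_len, vocab_size)
targets = torch.randint(0, vocab_size, (batch_size, seq_len))

# "Cross Entropy with mean reduction": average the per-token loss over every prediction.
loss_fn = nn.CrossEntropyLoss(reduction="mean")
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(f"loss = {loss.item():.3f} nats")
```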
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, 
recommended | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
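For readers who land on the Usage note above and are still unsure how to consume these files, one common route (an illustration, not part of the original README) is to download a single-file quant with `huggingface_hub` and load it with the `llama-cpp-python` bindings. A minimal sketch, assuming both packages are installed and picking the Q4_K_M file listed in the table:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the single-file imatrix quants listed in the table above.
model_path = hf_hub_download(
    repo_id="mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF",
    filename="OpenBioLLM-Llama3-8B.i1-Q4_K_M.gguf",
)

# Load it with llama.cpp's Python bindings; context size and sampling settings are illustrative.
llm = Llama(model_path=model_path, n_ctx=2048)
result = llm("Question: What does an imatrix quantization change? Answer:", max_tokens=64)
print(result["choices"][0]["text"])
```

Multi-part quants (none appear in this repo's table) would first need to be concatenated, as covered in the READMEs linked above.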
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "aaditya/OpenBioLLM-Llama3-8B", "quantized_by": "mradermacher"}
mradermacher/OpenBioLLM-Llama3-8B-i1-GGUF
null
[ "transformers", "gguf", "llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "base_model:aaditya/OpenBioLLM-Llama3-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:25:50+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #base_model-aaditya/OpenBioLLM-Llama3-8B #license-llama3 #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #base_model-aaditya/OpenBioLLM-Llama3-8B #license-llama3 #endpoints_compatible #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloom-560m - bnb 8bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloom-560m/ Original model description: --- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 # Model Card for Bloom-560m <!-- Provide a quick summary of what the model is/does. --> ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Recommendations](#recommendations) 5. [Training Data](#training-data) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Technical Specifications](#techincal-specifications) 9. [Citation](#citation) 10. [Glossary and Calculations](#glossary-and-calculations) 11. [More Information](#more-information) 12. [Model Card Authors](#model-card-authors) 13. [Model Card Contact](#model-card-contact) ## Model Details ### Model Description *This section provides information for anyone who wants to know about the model.* - **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* - **Model Type:** Transformer-based Language Model - **Version:** 1.0.0 - **Languages:** Multiple; see [training data](#training-data) - **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) - **Release Date Estimate:** Monday, 11.July.2022 - **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. 
#### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM ## Bias, Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs ### Recommendations *This section provides information on warnings and potential mitigations.* - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. 
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) **The following table shows the further distribution of Niger-Congo and Indic languages in the training data.** | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | **The following table shows the distribution of programming languages.** | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | ## Evaluation *This section describes the evaluation protocols and provides the results.* ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of what BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) ## Environmental Impact The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* ## Technical Specifications *This section provides information for people who work on model development.* Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. **Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 559,214,592 parameters: * 256,901,120 embedding parameters * 24 layers, 16 attention heads * Hidden layers are 1024-dimensional * Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). 
* Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) ### **Training** Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs) - Training throughput: About 150 TFLOPs per GPU - Number of epochs: 1 (*current target*) - Dates: - Started 11th March, 2022 11:42am PST - Ended 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) - Server training location: Île-de-France, France ### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. ## Citation **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). 
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). - <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. ## More Information ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff ## Model Card Contact **Send Questions to:** [email protected]
{}
RichardErkhov/bigscience_-_bloom-560m-8bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-26T22:26:26+00:00
[ "1909.08053", "2110.02861", "2108.12409" ]
[]
TAGS #transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bloom-560m - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: bigscience-bloom-rail-1.0 language: * ak * ar * as * bm * bn * ca * code * en * es * eu * fon * fr * gu * hi * id * ig * ki * kn * lg * ln * ml * mr * ne * nso * ny * or * pa * pt * rn * rw * sn * st * sw * ta * te * tn * ts * tum * tw * ur * vi * wo * xh * yo * zh * zhs * zht * zu pipeline\_tag: text-generation --- BLOOM LM ======== *BigScience Large Open-science Open-access Multilingual Language Model* ----------------------------------------------------------------------- ### Model Card ![](URL alt=) Version 1.0 / 26.May.2022 Model Card for Bloom-560m ========================= Table of Contents ----------------- 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Recommendations 5. Training Data 6. Evaluation 7. Environmental Impact 8. Technical Specifications 9. Citation 10. Glossary and Calculations 11. More Information 12. Model Card Authors 13. Model Card Contact Model Details ------------- ### Model Description *This section provides information for anyone who wants to know about the model.* * Developed by: BigScience (website) + All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* * Model Type: Transformer-based Language Model * Version: 1.0.0 * Languages: Multiple; see training data * License: RAIL License v1.0 (link) * Release Date Estimate: Monday, 11.July.2022 * Funded by: + The French government. + Hugging Face (website). + Organizations of contributors. *(Further breakdown of organizations forthcoming.)* Uses ---- *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### Direct Use * Text generation * Exploring characteristics of language generated by a language model + Examples: Cloze tests, counterfactuals, generations with reframings #### Downstream Use * Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### Out-of-scope Uses Using the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. 
##### Out-of-scope Uses Include: * Usage in biomedical domains, political and legal domains, or finance domains * Usage for evaluating or scoring individuals, such as for employment, education, or credit * Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### Misuse Intentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes: * Spam generation * Disinformation and influence operations * Disparagement and defamation * Harassment and abuse * Deception * Unconsented impersonation and imitation * Unconsented surveillance * Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions ### Intended Users #### Direct Users * General Public * Researchers * Students * Educators * Engineers/developers * Non-commercial entities * Community advocates, including human and civil rights groups #### Indirect Users * Users of derivatives created by Direct Users, such as those using software with an intended use * Users of Derivatives of the Model, as described in the License #### Others Affected (Parties Prenantes) * People and groups referred to by the LLM * People and groups exposed to outputs of, or decisions based on, the LLM * People and groups whose original work is included in the LLM Bias, Risks and Limitations --------------------------- *This section identifies foreseeable harms and misunderstandings.* Model may: * Overrepresent some viewpoints and underrepresent others * Contain stereotypes * Contain personal information * Generate: + Hateful, abusive, or violent language + Discriminatory or prejudicial language + Content that may not be appropriate for all settings, including sexual content * Make errors, including producing incorrect information as if it were factual * Generate irrelevant or repetitive outputs ### Recommendations *This section provides information on warnings and potential mitigations.* * Indirect users should be made aware when the content they're working with is created by the LLM. * Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. * Models pretrained with the LLM should include an updated Model Card. * Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. Training Data ------------- *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Details for each dataset are provided in individual Data Cards. Training data includes: * 45 natural languages * 12 programming languages * In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.) #### Languages The pie chart shows the distribution of languages in training data. !pie chart showing the distribution of languages in training data The following table shows the further distribution of Niger-Congo and Indic languages in the training data. The following table shows the distribution of programming languages. 
Extension: java, Language: Java, Number of files: 5,407,724 Extension: php, Language: PHP, Number of files: 4,942,186 Extension: cpp, Language: C++, Number of files: 2,503,930 Extension: py, Language: Python, Number of files: 2,435,072 Extension: js, Language: JavaScript, Number of files: 1,905,518 Extension: cs, Language: C#, Number of files: 1,577,347 Extension: rb, Language: Ruby, Number of files: 6,78,413 Extension: cc, Language: C++, Number of files: 443,054 Extension: hpp, Language: C++, Number of files: 391,048 Extension: lua, Language: Lua, Number of files: 352,317 Extension: go, Language: GO, Number of files: 227,763 Extension: ts, Language: TypeScript, Number of files: 195,254 Extension: C, Language: C, Number of files: 134,537 Extension: scala, Language: Scala, Number of files: 92,052 Extension: hh, Language: C++, Number of files: 67,161 Extension: H, Language: C++, Number of files: 55,899 Extension: tsx, Language: TypeScript, Number of files: 33,107 Extension: rs, Language: Rust, Number of files: 29,693 Extension: phpt, Language: PHP, Number of files: 9,702 Extension: c++, Language: C++, Number of files: 1,342 Extension: h++, Language: C++, Number of files: 791 Extension: php3, Language: PHP, Number of files: 540 Extension: phps, Language: PHP, Number of files: 270 Extension: php5, Language: PHP, Number of files: 166 Extension: php4, Language: PHP, Number of files: 29 Evaluation ---------- *This section describes the evaluation protocols and provides the results.* ### Metrics *This section describes the different ways performance is calculated and why.* Includes: And multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)* ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* * Language, such as English or Yoruba * Domain, such as newswire or stories * Demographic characteristics, such as gender or nationality ### Results *Results are based on the Factors and Metrics.* Train-time Evaluation: As of 25.May.2022, 15:00 PST: * Training Loss: 2.0 * Validation Loss: 2.2 * Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) Environmental Impact -------------------- The training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. Estimated carbon emissions: *(Forthcoming upon completion of training.)* Estimated electricity usage: *(Forthcoming upon completion of training.)* Technical Specifications ------------------------ *This section provides information for people who work on model development.* Please see the BLOOM training README for full details on replicating training. Model Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code): * Decoder-only architecture * Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper) * ALiBI positional encodings (see paper), with GeLU activation functions * 559,214,592 parameters: + 256,901,120 embedding parameters + 24 layers, 16 attention heads + Hidden layers are 1024-dimensional + Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description) Objective Function: Cross Entropy with mean reduction (see API documentation). Compute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement). 
* Hardware: 384 A100 80GB GPUs (48 nodes): + Additional 32 A100 80GB GPUs (4 nodes) in reserve + 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links + CPU: AMD + CPU memory: 512GB per node + GPU memory: 640GB per node + Inter-node connect: Omni-Path Architecture (OPA) + NCCL-communications network: a fully dedicated subnet + Disc IO network: shared network with other types of nodes * Software: + Megatron-DeepSpeed (Github link) + DeepSpeed (Github link) + PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link) + apex (Github link) ### Training Training logs: Tensorboard link * Training throughput: About 150 TFLOPs per GPU * Number of epochs: 1 (*current target*) * Dates: + Started 11th March, 2022 11:42am PST + Ended 5th July, 2022 * Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) * Server training location: Île-de-France, France ### Tokenization The BLOOM tokenizer (link) is a learned subword tokenizer trained using: * A byte-level Byte Pair Encoding (BPE) algorithm * A simple pre-tokenization rule, no normalization * A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. Cite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022 Glossary and Calculations ------------------------- *This section defines common terms and how metrics are calculated.* * Loss: A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. * Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. * High-stakes settings: Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed Artificial Intelligence (AI) Act. * Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act. * Human rights: Includes those rights defined in the Universal Declaration of Human Rights. * Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as "personal data" in the European Union's General Data Protection Regulation; and "personal information" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law. * Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1) * Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. 
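The Tokenization section above (byte-level BPE, no normalization, a 250,680-token vocabulary) can be checked directly against the published tokenizer. A small sketch, assuming Hub access; loading from the `bigscience/bloom-560m` repository rather than the standalone tokenizer repository is a convenience assumption here:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

print(tok.vocab_size)                     # expected to match the 250,680 vocabulary size stated above
ids = tok("BigScience a entraîné un modèle multilingue.")["input_ids"]
print(ids)
print(tok.convert_ids_to_tokens(ids))     # byte-level BPE pieces, with no lowercasing or other normalization
```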
More Information ---------------- ### Dataset Creation Blog post detailing the design choices during the dataset creation: URL ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL More details on the architecture/optimizer: URL Blog post on the hardware/engineering side: URL Details on the distributed setup used for the training: URL Tensorboard updated during the training: URL Insights on how to approach training, negative results: URL Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL ### Initial Results Initial prompting experiments using interim checkpoints: URL Model Card Authors ------------------ *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff Model Card Contact ------------------ Send Questions to: bigscience-contact@URL
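The architecture summary earlier in this card names ALiBi positional encodings. The snippet below sketches the core idea from the referenced paper for a power-of-two head count such as this model's 16 heads: each head gets a fixed slope, and attention scores are penalised linearly with query–key distance. It illustrates the technique only and is not a reproduction of BLOOM's exact implementation.

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Per-head linear attention biases in the spirit of the ALiBi paper.

    For a power-of-two n_heads, head h (1-indexed) uses slope (2 ** (-8 / n_heads)) ** h;
    the bias added to the score of a query at position i attending to a key j <= i is
    -slope * (i - j). Future positions are left at 0 here and handled by the causal mask.
    """
    slopes = torch.tensor([(2 ** (-8 / n_heads)) ** h for h in range(1, n_heads + 1)])
    pos = torch.arange(seq_len)
    distance = pos[None, :] - pos[:, None]                      # element [i, j] = j - i
    causal = torch.minimum(distance, torch.zeros_like(distance))
    return slopes[:, None, None] * causal[None, :, :]           # shape (n_heads, seq_len, seq_len)

print(alibi_bias(n_heads=16, seq_len=5)[0])   # first head: 0 on the diagonal, more negative with distance
```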
[ "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nModel Card for Bloom-560m\n=========================\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Recommendations\n5. Training Data\n6. Evaluation\n7. Environmental Impact\n8. Technical Specifications\n9. Citation\n10. Glossary and Calculations\n11. More Information\n12. Model Card Authors\n13. Model Card Contact\n\n\nModel Details\n-------------", "### Model Description\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n* Developed by: BigScience (website)\n\n\n\t+ All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n* Model Type: Transformer-based Language Model\n* Version: 1.0.0\n* Languages: Multiple; see training data\n* License: RAIL License v1.0 (link)\n* Release Date Estimate: Monday, 11.July.2022\n* Funded by:\n\n\n\t+ The French government.\n\t+ Hugging Face (website).\n\t+ Organizations of contributors. *(Further breakdown of organizations forthcoming.)*\n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\nBias, Risks and Limitations\n---------------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs", "### Recommendations\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nThe following table shows the distribution of programming languages.\n\n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.0\n* Validation Loss: 2.2\n* Perplexity: 8.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\nEnvironmental Impact\n--------------------\n\n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\nTechnical Specifications\n------------------------\n\n\n*This section provides information for people who work on model development.*\n\n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 559,214,592 parameters:\n\n\n\t+ 256,901,120 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1024-dimensional\n\t+ Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Training throughput: About 150 TFLOPs per GPU\n* Number of epochs: 1 (*current target*)\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\nMore Information\n----------------", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff\n\n\nModel Card Contact\n------------------\n\n\nSend Questions to: bigscience-contact@URL" ]
[ "TAGS\n#transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nModel Card for Bloom-560m\n=========================\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Recommendations\n5. Training Data\n6. Evaluation\n7. Environmental Impact\n8. Technical Specifications\n9. Citation\n10. Glossary and Calculations\n11. More Information\n12. Model Card Authors\n13. Model Card Contact\n\n\nModel Details\n-------------", "### Model Description\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n* Developed by: BigScience (website)\n\n\n\t+ All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n* Model Type: Transformer-based Language Model\n* Version: 1.0.0\n* Languages: Multiple; see training data\n* License: RAIL License v1.0 (link)\n* Release Date Estimate: Monday, 11.July.2022\n* Funded by:\n\n\n\t+ The French government.\n\t+ Hugging Face (website).\n\t+ Organizations of contributors. *(Further breakdown of organizations forthcoming.)*\n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\nBias, Risks and Limitations\n---------------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs", "### Recommendations\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nThe following table shows the distribution of programming languages.\n\n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.0\n* Validation Loss: 2.2\n* Perplexity: 8.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\nEnvironmental Impact\n--------------------\n\n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\nTechnical Specifications\n------------------------\n\n\n*This section provides information for people who work on model development.*\n\n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 559,214,592 parameters:\n\n\n\t+ 256,901,120 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1024-dimensional\n\t+ Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Training throughput: About 150 TFLOPs per GPU\n* Number of epochs: 1 (*current target*)\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\nMore Information\n----------------", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff\n\n\nModel Card Contact\n------------------\n\n\nSend Questions to: bigscience-contact@URL" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [vaatsav06/Llama3_medqa_final](https://huggingface.co/vaatsav06/Llama3_medqa_final) * [o2satz/L3_med16](https://huggingface.co/o2satz/L3_med16) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: o2satz/L3_med16 layer_range: - 0 - 32 - model: vaatsav06/Llama3_medqa_final layer_range: - 0 - 32 merge_method: slerp base_model: vaatsav06/Llama3_medqa_final parameters: t: - filter: self_attn value: - 0 - 0.5 - 0.3 - 0.7 - 1 - filter: mlp value: - 1 - 0.5 - 0.7 - 0.3 - 0 - value: 0.5 dtype: bfloat16 ```
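A hedged usage sketch, not part of the original card: the merge could presumably be reproduced by pointing mergekit's `mergekit-yaml` entry point at the configuration above, and the resulting checkpoint can then be loaded like any other `transformers` causal LM. The repo id below is taken from this record's id field and the `bfloat16` dtype from the merge config; both are assumptions rather than documented usage.

```python
# Minimal sketch: load the merged checkpoint with transformers.
# Assumes the merge was published as "o2satz/L3_med16QA" (the id of this record).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "o2satz/L3_med16QA"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",
)

prompt = "Summarize the main risk factors for hypertension."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```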
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["vaatsav06/Llama3_medqa_final", "o2satz/L3_med16"]}
o2satz/L3_med16QA
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:vaatsav06/Llama3_medqa_final", "base_model:o2satz/L3_med16", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-26T22:27:32+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-vaatsav06/Llama3_medqa_final #base_model-o2satz/L3_med16 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * vaatsav06/Llama3_medqa_final * o2satz/L3_med16 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* vaatsav06/Llama3_medqa_final\n* o2satz/L3_med16", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-vaatsav06/Llama3_medqa_final #base_model-o2satz/L3_med16 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* vaatsav06/Llama3_medqa_final\n* o2satz/L3_med16", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloom-560m - GGUF - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloom-560m/ | Name | Quant method | Size | | ---- | ---- | ---- | | [bloom-560m.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q2_K.gguf) | Q2_K | 0.39GB | | [bloom-560m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ3_XS.gguf) | IQ3_XS | 0.43GB | | [bloom-560m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ3_S.gguf) | IQ3_S | 0.43GB | | [bloom-560m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q3_K_S.gguf) | Q3_K_S | 0.43GB | | [bloom-560m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ3_M.gguf) | IQ3_M | 0.45GB | | [bloom-560m.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q3_K.gguf) | Q3_K | 0.46GB | | [bloom-560m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q3_K_M.gguf) | Q3_K_M | 0.46GB | | [bloom-560m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q3_K_L.gguf) | Q3_K_L | 0.47GB | | [bloom-560m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ4_XS.gguf) | IQ4_XS | 0.49GB | | [bloom-560m.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_0.gguf) | Q4_0 | 0.5GB | | [bloom-560m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.IQ4_NL.gguf) | IQ4_NL | 0.5GB | | [bloom-560m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_K_S.gguf) | Q4_K_S | 0.5GB | | [bloom-560m.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_K.gguf) | Q4_K | 0.52GB | | [bloom-560m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_K_M.gguf) | Q4_K_M | 0.52GB | | [bloom-560m.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q4_1.gguf) | Q4_1 | 0.53GB | | [bloom-560m.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_0.gguf) | Q5_0 | 0.57GB | | [bloom-560m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_K_S.gguf) | Q5_K_S | 0.57GB | | [bloom-560m.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_K.gguf) | Q5_K | 0.58GB | | [bloom-560m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_K_M.gguf) | Q5_K_M | 0.58GB | | [bloom-560m.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q5_1.gguf) | Q5_1 | 0.6GB | | [bloom-560m.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-560m-gguf/blob/main/bloom-560m.Q6_K.gguf) | Q6_K | 0.64GB | Original model description: --- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - 
id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 # Model Card for Bloom-560m <!-- Provide a quick summary of what the model is/does. --> ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Recommendations](#recommendations) 5. [Training Data](#training-data) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Technical Specifications](#techincal-specifications) 9. [Citation](#citation) 10. [Glossary and Calculations](#glossary-and-calculations) 11. [More Information](#more-information) 12. [Model Card Authors](#model-card-authors) 13. [Model Card Contact](#model-card-contact) ## Model Details ### Model Description *This section provides information for anyone who wants to know about the model.* - **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* - **Model Type:** Transformer-based Language Model - **Version:** 1.0.0 - **Languages:** Multiple; see [training data](#training-data) - **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) - **Release Date Estimate:** Monday, 11.July.2022 - **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  
The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM ## Bias, Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs ### Recommendations *This section provides information on warnings and potential mitigations.* - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. 
![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) **The following table shows the further distribution of Niger-Congo and Indic languages in the training data.** | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | **The following table shows the distribution of programming languages.** | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | ## Evaluation *This section describes the evaluation protocols and provides the results.* ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) ## Environmental Impact The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. 
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* ## Technical Specifications *This section provides information for people who work on model development.* Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. **Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 559,214,592 parameters: * 256,901,120 embedding parameters * 24 layers, 16 attention heads * Hidden layers are 1024-dimensional * Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). * Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) ### **Training** Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs) - Training throughput: About 150 TFLOPs per GPU - Number of epochs: 1 (*current target*) - Dates: - Started 11th March, 2022 11:42am PST - Ended 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) - Server training location: Île-de-France, France ### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. ## Citation **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. 
International, May 2021-May 2022 ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). - <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. 
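As a quick sanity check on the figures quoted in the Technical Specifications section above (a sketch added for this card, not part of the original): dividing the reported embedding parameter count by the hidden size recovers a padded vocabulary of 250,880 entries, slightly larger than the 250,680-token tokenizer vocabulary, and a standard decoder block accounts for the remainder. The layer-level breakdown below (fused QKV projection, 4x MLP, two LayerNorms per block, tied input/output embeddings) is an assumption; only the totals come from the card.

```python
# Sanity-check BLOOM-560m's reported parameter counts using the card's own numbers.
hidden = 1024
layers = 24
vocab_padded = 256_901_120 // hidden   # 250,880 = embedding parameters / hidden size

embedding = vocab_padded * hidden      # 256,901,120 embedding parameters

# Per transformer layer (assumed breakdown): fused QKV + attention output projection
# + 4x MLP up/down projections + two LayerNorms, each with weights and biases.
per_layer = (
    hidden * 3 * hidden + 3 * hidden   # QKV projection
    + hidden * hidden + hidden         # attention output projection
    + hidden * 4 * hidden + 4 * hidden # MLP up-projection
    + 4 * hidden * hidden + hidden     # MLP down-projection
    + 2 * 2 * hidden                   # two LayerNorms
)
extra_norms = 2 * 2 * hidden           # word-embedding LayerNorm + final LayerNorm

total = embedding + layers * per_layer + extra_norms
print(total)                           # 559214592, matching the 559,214,592 in the card
```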
## More Information ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff ## Model Card Contact **Send Questions to:** [email protected]
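A hedged usage sketch for the quantized files listed in the table at the top of this card (not part of the original description): the repo id and filename below come from that table, while the use of `huggingface_hub` and `llama-cpp-python` is an assumption about tooling rather than documented instructions, and presumes the installed llama.cpp build supports the BLOOM architecture.

```python
# Minimal sketch: download one GGUF quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/bigscience_-_bloom-560m-gguf",
    filename="bloom-560m.Q4_K_M.gguf",  # 0.52GB entry from the table above
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The BLOOM language model was trained on", max_tokens=32)
print(out["choices"][0]["text"])
```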
{}
RichardErkhov/bigscience_-_bloom-560m-gguf
null
[ "gguf", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "region:us" ]
null
2024-04-26T22:27:47+00:00
[ "1909.08053", "2110.02861", "2108.12409" ]
[]
TAGS #gguf #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bloom-560m - GGUF * Model creator: URL * Original model: URL Name: bloom-560m.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.39GB Name: bloom-560m.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.43GB Name: bloom-560m.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.43GB Name: bloom-560m.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.43GB Name: bloom-560m.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.45GB Name: bloom-560m.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.46GB Name: bloom-560m.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.46GB Name: bloom-560m.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.47GB Name: bloom-560m.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.49GB Name: bloom-560m.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.5GB Name: bloom-560m.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.5GB Name: bloom-560m.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.5GB Name: bloom-560m.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.52GB Name: bloom-560m.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.52GB Name: bloom-560m.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.53GB Name: bloom-560m.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.57GB Name: bloom-560m.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.57GB Name: bloom-560m.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.58GB Name: bloom-560m.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.58GB Name: bloom-560m.Q5\_1.gguf, Quant method: Q5\_1, Size: 0.6GB Name: bloom-560m.Q6\_K.gguf, Quant method: Q6\_K, Size: 0.64GB Original model description: --------------------------- license: bigscience-bloom-rail-1.0 language: * ak * ar * as * bm * bn * ca * code * en * es * eu * fon * fr * gu * hi * id * ig * ki * kn * lg * ln * ml * mr * ne * nso * ny * or * pa * pt * rn * rw * sn * st * sw * ta * te * tn * ts * tum * tw * ur * vi * wo * xh * yo * zh * zhs * zht * zu pipeline\_tag: text-generation --- BLOOM LM ======== *BigScience Large Open-science Open-access Multilingual Language Model* ----------------------------------------------------------------------- ### Model Card ![](URL alt=) Version 1.0 / 26.May.2022 Model Card for Bloom-560m ========================= Table of Contents ----------------- 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Recommendations 5. Training Data 6. Evaluation 7. Environmental Impact 8. Technical Specifications 9. Citation 10. Glossary and Calculations 11. More Information 12. Model Card Authors 13. Model Card Contact Model Details ------------- ### Model Description *This section provides information for anyone who wants to know about the model.* * Developed by: BigScience (website) + All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* * Model Type: Transformer-based Language Model * Version: 1.0.0 * Languages: Multiple; see training data * License: RAIL License v1.0 (link) * Release Date Estimate: Monday, 11.July.2022 * Funded by: + The French government. + Hugging Face (website). + Organizations of contributors. *(Further breakdown of organizations forthcoming.)* Uses ---- *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. 
It provides information for anyone considering using the model or who is affected by the model.* ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### Direct Use * Text generation * Exploring characteristics of language generated by a language model + Examples: Cloze tests, counterfactuals, generations with reframings #### Downstream Use * Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### Out-of-scope Uses Using the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: * Usage in biomedical domains, political and legal domains, or finance domains * Usage for evaluating or scoring individuals, such as for employment, education, or credit * Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### Misuse Intentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes: * Spam generation * Disinformation and influence operations * Disparagement and defamation * Harassment and abuse * Deception * Unconsented impersonation and imitation * Unconsented surveillance * Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions ### Intended Users #### Direct Users * General Public * Researchers * Students * Educators * Engineers/developers * Non-commercial entities * Community advocates, including human and civil rights groups #### Indirect Users * Users of derivatives created by Direct Users, such as those using software with an intended use * Users of Derivatives of the Model, as described in the License #### Others Affected (Parties Prenantes) * People and groups referred to by the LLM * People and groups exposed to outputs of, or decisions based on, the LLM * People and groups whose original work is included in the LLM Bias, Risks and Limitations --------------------------- *This section identifies foreseeable harms and misunderstandings.* Model may: * Overrepresent some viewpoints and underrepresent others * Contain stereotypes * Contain personal information * Generate: + Hateful, abusive, or violent language + Discriminatory or prejudicial language + Content that may not be appropriate for all settings, including sexual content * Make errors, including producing incorrect information as if it were factual * Generate irrelevant or repetitive outputs ### Recommendations *This section provides information on warnings and potential mitigations.* * Indirect users should be made aware when the content they're working with is created by the LLM. * Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. 
* Models pretrained with the LLM should include an updated Model Card. * Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. Training Data ------------- *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Details for each dataset are provided in individual Data Cards. Training data includes: * 45 natural languages * 12 programming languages * In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.) #### Languages The pie chart shows the distribution of languages in training data. !pie chart showing the distribution of languages in training data The following table shows the further distribution of Niger-Congo and Indic languages in the training data. The following table shows the distribution of programming languages. Extension: java, Language: Java, Number of files: 5,407,724 Extension: php, Language: PHP, Number of files: 4,942,186 Extension: cpp, Language: C++, Number of files: 2,503,930 Extension: py, Language: Python, Number of files: 2,435,072 Extension: js, Language: JavaScript, Number of files: 1,905,518 Extension: cs, Language: C#, Number of files: 1,577,347 Extension: rb, Language: Ruby, Number of files: 6,78,413 Extension: cc, Language: C++, Number of files: 443,054 Extension: hpp, Language: C++, Number of files: 391,048 Extension: lua, Language: Lua, Number of files: 352,317 Extension: go, Language: GO, Number of files: 227,763 Extension: ts, Language: TypeScript, Number of files: 195,254 Extension: C, Language: C, Number of files: 134,537 Extension: scala, Language: Scala, Number of files: 92,052 Extension: hh, Language: C++, Number of files: 67,161 Extension: H, Language: C++, Number of files: 55,899 Extension: tsx, Language: TypeScript, Number of files: 33,107 Extension: rs, Language: Rust, Number of files: 29,693 Extension: phpt, Language: PHP, Number of files: 9,702 Extension: c++, Language: C++, Number of files: 1,342 Extension: h++, Language: C++, Number of files: 791 Extension: php3, Language: PHP, Number of files: 540 Extension: phps, Language: PHP, Number of files: 270 Extension: php5, Language: PHP, Number of files: 166 Extension: php4, Language: PHP, Number of files: 29 Evaluation ---------- *This section describes the evaluation protocols and provides the results.* ### Metrics *This section describes the different ways performance is calculated and why.* Includes: And multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)* ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* * Language, such as English or Yoruba * Domain, such as newswire or stories * Demographic characteristics, such as gender or nationality ### Results *Results are based on the Factors and Metrics.* Train-time Evaluation: As of 25.May.2022, 15:00 PST: * Training Loss: 2.0 * Validation Loss: 2.2 * Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) Environmental Impact -------------------- The training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. 
Estimated carbon emissions: *(Forthcoming upon completion of training.)* Estimated electricity usage: *(Forthcoming upon completion of training.)* Technical Specifications ------------------------ *This section provides information for people who work on model development.* Please see the BLOOM training README for full details on replicating training. Model Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code): * Decoder-only architecture * Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper) * ALiBI positional encodings (see paper), with GeLU activation functions * 559,214,592 parameters: + 256,901,120 embedding parameters + 24 layers, 16 attention heads + Hidden layers are 1024-dimensional + Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description) Objective Function: Cross Entropy with mean reduction (see API documentation). Compute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement). * Hardware: 384 A100 80GB GPUs (48 nodes): + Additional 32 A100 80GB GPUs (4 nodes) in reserve + 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links + CPU: AMD + CPU memory: 512GB per node + GPU memory: 640GB per node + Inter-node connect: Omni-Path Architecture (OPA) + NCCL-communications network: a fully dedicated subnet + Disc IO network: shared network with other types of nodes * Software: + Megatron-DeepSpeed (Github link) + DeepSpeed (Github link) + PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link) + apex (Github link) ### Training Training logs: Tensorboard link * Training throughput: About 150 TFLOPs per GPU * Number of epochs: 1 (*current target*) * Dates: + Started 11th March, 2022 11:42am PST + Ended 5th July, 2022 * Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) * Server training location: Île-de-France, France ### Tokenization The BLOOM tokenizer (link) is a learned subword tokenizer trained using: * A byte-level Byte Pair Encoding (BPE) algorithm * A simple pre-tokenization rule, no normalization * A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. Cite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022 Glossary and Calculations ------------------------- *This section defines common terms and how metrics are calculated.* * Loss: A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. * Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. * High-stakes settings: Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed Artificial Intelligence (AI) Act. * Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act. * Human rights: Includes those rights defined in the Universal Declaration of Human Rights. 
* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as "personal data" in the European Union's General Data Protection Regulation; and "personal information" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law. * Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1) * Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. More Information ---------------- ### Dataset Creation Blog post detailing the design choices during the dataset creation: URL ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL More details on the architecture/optimizer: URL Blog post on the hardware/engineering side: URL Details on the distributed setup used for the training: URL Tensorboard updated during the training: URL Insights on how to approach training, negative results: URL Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL ### Initial Results Initial prompting experiments using interim checkpoints: URL Model Card Authors ------------------ *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff Model Card Contact ------------------ Send Questions to: bigscience-contact@URL
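The listing above covers plain GGUF quantizations of bloom-560m. Below is a minimal sketch of running one of them locally with llama-cpp-python — the file path is a placeholder for whichever quant you downloaded, and it assumes the installed llama.cpp build supports the BLOOM architecture:

```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python.
# The local path is a placeholder; point it at whichever quant file you fetched.
from llama_cpp import Llama

llm = Llama(
    model_path="./bloom-560m.Q4_K_M.gguf",  # any of the quants listed above
    n_ctx=2048,                             # the card lists a 2048-token sequence length
)

out = llm("The BLOOM language model is", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```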
[ "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nModel Card for Bloom-560m\n=========================\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Recommendations\n5. Training Data\n6. Evaluation\n7. Environmental Impact\n8. Technical Specifications\n9. Citation\n10. Glossary and Calculations\n11. More Information\n12. Model Card Authors\n13. Model Card Contact\n\n\nModel Details\n-------------", "### Model Description\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n* Developed by: BigScience (website)\n\n\n\t+ All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n* Model Type: Transformer-based Language Model\n* Version: 1.0.0\n* Languages: Multiple; see training data\n* License: RAIL License v1.0 (link)\n* Release Date Estimate: Monday, 11.July.2022\n* Funded by:\n\n\n\t+ The French government.\n\t+ Hugging Face (website).\n\t+ Organizations of contributors. *(Further breakdown of organizations forthcoming.)*\n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\nBias, Risks and Limitations\n---------------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs", "### Recommendations\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nThe following table shows the distribution of programming languages.\n\n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.0\n* Validation Loss: 2.2\n* Perplexity: 8.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\nEnvironmental Impact\n--------------------\n\n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\nTechnical Specifications\n------------------------\n\n\n*This section provides information for people who work on model development.*\n\n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 559,214,592 parameters:\n\n\n\t+ 256,901,120 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1024-dimensional\n\t+ Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Training throughput: About 150 TFLOPs per GPU\n* Number of epochs: 1 (*current target*)\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\nMore Information\n----------------", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff\n\n\nModel Card Contact\n------------------\n\n\nSend Questions to: bigscience-contact@URL" ]
[ "TAGS\n#gguf #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #region-us \n", "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nModel Card for Bloom-560m\n=========================\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Recommendations\n5. Training Data\n6. Evaluation\n7. Environmental Impact\n8. Technical Specifications\n9. Citation\n10. Glossary and Calculations\n11. More Information\n12. Model Card Authors\n13. Model Card Contact\n\n\nModel Details\n-------------", "### Model Description\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n* Developed by: BigScience (website)\n\n\n\t+ All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n* Model Type: Transformer-based Language Model\n* Version: 1.0.0\n* Languages: Multiple; see training data\n* License: RAIL License v1.0 (link)\n* Release Date Estimate: Monday, 11.July.2022\n* Funded by:\n\n\n\t+ The French government.\n\t+ Hugging Face (website).\n\t+ Organizations of contributors. *(Further breakdown of organizations forthcoming.)*\n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\nBias, Risks and Limitations\n---------------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs", "### Recommendations\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nThe following table shows the distribution of programming languages.\n\n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.0\n* Validation Loss: 2.2\n* Perplexity: 8.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\nEnvironmental Impact\n--------------------\n\n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\nTechnical Specifications\n------------------------\n\n\n*This section provides information for people who work on model development.*\n\n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 559,214,592 parameters:\n\n\n\t+ 256,901,120 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1024-dimensional\n\t+ Sequence length of 2048 tokens (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Training throughput: About 150 TFLOPs per GPU\n* Number of epochs: 1 (*current target*)\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\nMore Information\n----------------", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff\n\n\nModel Card Contact\n------------------\n\n\nSend Questions to: bigscience-contact@URL" ]
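The glossary in the card above defines perplexity via entropy. As an illustrative check only (assuming the reported validation loss is a mean per-token cross-entropy in nats), perplexity is simply the exponential of that loss:

```python
import math

# Train-time numbers reported in the card (25.May.2022 snapshot).
validation_loss = 2.2          # assumed to be mean cross-entropy in nats/token
perplexity = math.exp(validation_loss)
print(round(perplexity, 1))    # ~9.0, in line with the reported perplexity of 8.9
```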
text-classification
transformers
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 1.0682125091552734 f1_macro: 0.17334740509080254 f1_micro: 0.677115987460815 f1_weighted: 0.6445509958774286 precision_macro: 0.20939709069152226 precision_micro: 0.677115987460815 precision_weighted: 0.6403148729034246 recall_macro: 0.17545960483239412 recall_micro: 0.677115987460815 recall_weighted: 0.677115987460815 accuracy: 0.677115987460815
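This is a standard AutoTrain text-classification checkpoint, so it can be queried with the transformers pipeline. A minimal sketch follows — the label names come from the (undocumented) training data, so treat the returned labels as opaque IDs:

```python
# Minimal sketch: run the AutoTrain classifier with the transformers pipeline.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="KassioLima/autotrain-produto-google-bert-base-uncased-9099",
)

print(clf("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```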
{"tags": ["autotrain", "text-classification"], "datasets": ["autotrain-produto-google-bert-base-uncased-9099/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]}
KassioLima/autotrain-produto-google-bert-base-uncased-9099
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "dataset:autotrain-produto-google-bert-base-uncased-9099/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:27:56+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #autotrain #dataset-autotrain-produto-google-bert-base-uncased-9099/autotrain-data #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 1.0682125091552734 f1_macro: 0.17334740509080254 f1_micro: 0.677115987460815 f1_weighted: 0.6445509958774286 precision_macro: 0.20939709069152226 precision_micro: 0.677115987460815 precision_weighted: 0.6403148729034246 recall_macro: 0.17545960483239412 recall_micro: 0.677115987460815 recall_weighted: 0.677115987460815 accuracy: 0.677115987460815
[ "# Model Trained Using AutoTrain\n\n- Problem type: Text Classification", "## Validation Metrics\nloss: 1.0682125091552734\n\nf1_macro: 0.17334740509080254\n\nf1_micro: 0.677115987460815\n\nf1_weighted: 0.6445509958774286\n\nprecision_macro: 0.20939709069152226\n\nprecision_micro: 0.677115987460815\n\nprecision_weighted: 0.6403148729034246\n\nrecall_macro: 0.17545960483239412\n\nrecall_micro: 0.677115987460815\n\nrecall_weighted: 0.677115987460815\n\naccuracy: 0.677115987460815" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #autotrain #dataset-autotrain-produto-google-bert-base-uncased-9099/autotrain-data #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\n- Problem type: Text Classification", "## Validation Metrics\nloss: 1.0682125091552734\n\nf1_macro: 0.17334740509080254\n\nf1_micro: 0.677115987460815\n\nf1_weighted: 0.6445509958774286\n\nprecision_macro: 0.20939709069152226\n\nprecision_micro: 0.677115987460815\n\nprecision_weighted: 0.6403148729034246\n\nrecall_macro: 0.17545960483239412\n\nrecall_micro: 0.677115987460815\n\nrecall_weighted: 0.677115987460815\n\naccuracy: 0.677115987460815" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Lakoc/ebranchformer_6_128h_for_pretraining
null
[ "transformers", "wav2vec2-ebranchformer", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:29:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #wav2vec2-ebranchformer #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #wav2vec2-ebranchformer #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning Introducing Llama-3-TenyxChat-70B, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized). We fine-tune [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with our proprietary approach, which shows an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) score* without a drop in the model's performance on other benchmarks. Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution. Llama-3-TenyxChat-70B was trained using eight A100s (80GB) for fifteen hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)). *The MT-Bench evaluation we perform follows the latest eval upgrade as PR'd [here](https://github.com/lm-sys/FastChat/pull/3158). This PR upgrades the evaluation from `GPT-4-0613` to `GPT-4-preview-0125` (latest version) as well as corrects and improves the quality of the reference answers for a subset of questions. These changes are required to correct the erroneous ratings from the previous evaluation. **Model Developers** [Tenyx Research](https://www.tenyx.com/research) # Model details - Model type: Fine-tuned 70B Instruct model for chat. - License: Meta Llama 3 Community License - Base model: [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) - Demo: [HuggingFace Space](https://huggingface.co/spaces/tenyx/Llama3-TenyxChat-70B) ## Usage Our model uses the same chat template as [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). ### Hugging Face Example ```python import torch from transformers import pipeline pipe = pipeline("text-generation", model="tenyx/Llama3-TenyxChat-70B", torch_dtype=torch.bfloat16, device_map="auto") messages = [ {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."}, {"role": "user", "content": "Hi. I would like to make a hotel booking."}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=512, do_sample=False) ``` # Performance At the time of release (April 2024), Llama3-TenyxChat-70B is the highest-ranked open-source model available for download on the MT-Bench evaluation. ## MT-Bench MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using `GPT-4-preview-0125` on a scale of 1 to 10, with higher values corresponding to better responses.
| Model-name | GPT4-preview-0125 MT Bench | Chat Arena Elo | |--------------------------------|----------------------------|----------------| | GPT-4-1106 | 8.79 | 1251 | | Claude 3 Opus (20240229) | 8.57 | 1247 | | **Llama3-TenyxChat-70B** |**8.15** | NA | | *Llama3-70B-Instruct* | 7.96 | 1207 | | Claude 3 Sonnet (20240229) | 7.82 | 1190 | | GPT-4-0314 | 7.96 | 1185 | | Mixtral | 7.38 | 1114 | | gpt-3.5-turbo-0613 | 7.37 | 1113 | | Yi-34B | 6.46 | 1099 | | gpt-3.5-turbo-0125 | 7.52 | 1096 | | Llama 2 70B | 6.01 | 1082 | | NV-Llama2-70B-SteerLM-Chat | 6.57 | 1076 | ![hexplot.png](hexplot_llama3-tenyxchat-70b.png) ## Arena Hard Arena-Hard is an evaluation tool for instruction-tuned LLMs containing 500 challenging user queries. They prompt GPT-4-1106-preview as judge to compare the models' responses against a baseline model (default: GPT-4-0314). | Model-name | Score | | |--------------------------------|--------|---------------------| | gpt-4-0125-preview | 78.0 | 95% CI: (-1.8, 2.2) | | claude-3-opus-20240229 | 60.4 | 95% CI: (-2.6, 2.1) | | gpt-4-0314 | 50.0 | 95% CI: (0.0, 0.0) | | **tenyx/Llama3-TenyxChat-70B** | **49.0** | 95% CI: (-3.0, 2.4) | | *meta-llama/Meta-Llama-3-70B-In* | 47.3 | 95% CI: (-1.7, 2.6) | | claude-3-sonnet-20240229 | 46.8 | 95% CI: (-2.7, 2.3) | | claude-3-haiku-20240307 | 41.5 | 95% CI: (-2.4, 2.5) | | gpt-4-0613 | 37.9 | 95% CI: (-2.1, 2.2) | | mistral-large-2402 | 37.7 | 95% CI: (-2.9, 2.8) | | Qwen1.5-72B-Chat | 36.1 | 95% CI: (-2.1, 2.4) | | command-r-plus | 33.1 | 95% CI: (-2.0, 1.9) | ## Open LLM Leaderboard Evaluation We now present our results on the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) used for benchmarking Open LLM Leaderboard on Hugging Face. The task involves evaluation on `6` key benchmarks across reasoning and knowledge with different *few-shot* settings. Read more details about the benchmark at [the leaderboard page](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model-name | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | **Llama3-TenyxChat-70B** | **79.43** | 72.53 | 86.11 | 79.95 | 62.93 | 83.82 | 91.21 | | *Llama3-70B-Instruct* | 77.88 | 71.42 | 85.69 | 80.06 | 61.81 | 82.87 | 85.44 | *The results reported are from local evaluation of our model. `tenyx/Llama3-TenyxChat-70B` is submitted and will be reflected in the leaderboard once evaluation succeeds. # Limitations Llama3-TenyxChat-70B, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content. # License Llama3-TenyxChat-70B is distributed under the Meta Llama 3 Community License. # Citation If you use Llama3-TenyxChat-70B for your research, cite us as ``` @misc{tenyxchat2024, title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning}, author={Tenyx}, year={2024}, } ```
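The usage section of the card above loads the full model in bfloat16 through a `pipeline`. Since this is a 70B checkpoint, a minimal sketch of a 4-bit quantized load is shown below; it is not part of the Tenyx card and assumes `bitsandbytes` and `accelerate` are installed alongside `transformers`.

```python
# Hedged sketch (not from the Tenyx card): load the 70B checkpoint in 4-bit with
# bitsandbytes to reduce GPU memory. Assumes bitsandbytes + accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tenyx/Llama3-TenyxChat-70B"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```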
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["tenyx-fine-tuning", "dpo", "tenyxchat", "llama3"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "pipeline_tag": "text-generation"}
tenyx/Llama3-TenyxChat-70B
null
[ "transformers", "safetensors", "llama", "text-generation", "tenyx-fine-tuning", "dpo", "tenyxchat", "llama3", "conversational", "en", "dataset:HuggingFaceH4/ultrafeedback_binarized", "arxiv:2305.18290", "arxiv:2306.05685", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "has_space" ]
null
2024-04-26T22:31:07+00:00
[ "2305.18290", "2306.05685" ]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #tenyx-fine-tuning #dpo #tenyxchat #llama3 #conversational #en #dataset-HuggingFaceH4/ultrafeedback_binarized #arxiv-2305.18290 #arxiv-2306.05685 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #has_space
TenyxChat: Language Model Alignment using Tenyx Fine-tuning =========================================================== Introducing Llama-3-TenyxChat-70B, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's advanced fine-tuning technology (VentureBeat article). Our model is trained using the Direct Preference Optimization (DPO) framework on the open-source AI feedback dataset UltraFeedback. We fine-tune Llama3-70B with our proprietary approach which shows an increase in MT-Bench\*, without a drop in performance of the model on other benchmarks. Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution. Llama-3-TenyxChat-70B was trained using eight A100s (80GB) for fifteen hours, with a training setup obtained from HuggingFaceH4 (GitHub). \*The MT-Bench evaluation we perform follows the latest eval upgrade as PR'd here. This PR upgrades the evaluation from 'GPT-4-0613' to 'GPT-4-preview-0125' (latest version) as well as corrects and improves the quality of the reference answers for a subset of questions. These changes are required to correct the erroneous rating during previous evaluation. Model Developers Tenyx Research Model details ============= * Model type: Fine-tuned 70B Instruct model for chat. * License: Meta Llama 3 Community License * Base model: Llama3-70B * Demo: HuggingFace Space Usage ----- Our model uses the same chat template as Llama3-70B. ### Hugging face Example Performance =========== At the time of release (April 2024), Llama3-TenyxChat-70B is the highest-ranked open source model on the MT-Bench evaluation available for download. MT-Bench -------- MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using 'GPT-4-preview-0125' on a scale of 1 to 10, with higher values corresponding to better responses. Model-name: GPT-4-1106, GPT4-preview-0125 MT Bench: 8.79, Chat Arena Elo: 1251 Model-name: Claude 3 Opus (20240229), GPT4-preview-0125 MT Bench: 8.57, Chat Arena Elo: 1247 Model-name: Llama3-TenyxChat-70B, GPT4-preview-0125 MT Bench: 8.15, Chat Arena Elo: NA Model-name: *Llama3-70B-Instruct*, GPT4-preview-0125 MT Bench: 7.96, Chat Arena Elo: 1207 Model-name: Claude 3 Sonnet (20240229), GPT4-preview-0125 MT Bench: 7.82, Chat Arena Elo: 1190 Model-name: GPT-4-0314, GPT4-preview-0125 MT Bench: 7.96, Chat Arena Elo: 1185 Model-name: Mixtral, GPT4-preview-0125 MT Bench: 7.38, Chat Arena Elo: 1114 Model-name: gpt-3.5-turbo-0613, GPT4-preview-0125 MT Bench: 7.37, Chat Arena Elo: 1113 Model-name: Yi-34B, GPT4-preview-0125 MT Bench: 6.46, Chat Arena Elo: 1099 Model-name: gpt-3.5-turbo-0125, GPT4-preview-0125 MT Bench: 7.52, Chat Arena Elo: 1096 Model-name: Llama 2 70B, GPT4-preview-0125 MT Bench: 6.01, Chat Arena Elo: 1082 Model-name: NV-Llama2-70B-SteerLM-Chat, GPT4-preview-0125 MT Bench: 6.57, Chat Arena Elo: 1076 !URL Arena Hard ---------- Arena-Hard is an evaluation tool for instruction-tuned LLMs containing 500 challenging user queries. They prompt GPT-4-1106-preview as judge to compare the models' responses against a baseline model (default: GPT-4-0314). 
Model-name: gpt-4-0125-preview, Score: 78.0 Model-name: claude-3-opus-20240229, Score: 60.4 Model-name: gpt-4-0314, Score: 50.0 Model-name: tenyx/Llama3-TenyxChat-70B, Score: 49.0 Model-name: *meta-llama/Meta-Llama-3-70B-In*, Score: 47.3 Model-name: claude-3-sonnet-20240229, Score: 46.8 Model-name: claude-3-haiku-20240307, Score: 41.5 Model-name: gpt-4-0613, Score: 37.9 Model-name: mistral-large-2402, Score: 37.7 Model-name: Qwen1.5-72B-Chat, Score: 36.1 Model-name: command-r-plus, Score: 33.1 Open LLM Leaderboard Evaluation ------------------------------- We now present our results on the Eleuther AI Language Model Evaluation Harness used for benchmarking Open LLM Leaderboard on Hugging Face. The task involves evaluation on '6' key benchmarks across reasoning and knowledge with different *few-shot* settings. Read more details about the benchmark at the leaderboard page. \*The results reported are from local evaluation of our model. 'tenyx/Llama3-TenyxChat-70B' is submitted and will be reflected in the leaderboard once evaluation succeeds. Limitations =========== Llama3-TenyxChat-70B, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with human safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content. License ======= Llama3-TenyxChat-70B is distributed under the Meta Llama 3 Community License. If you use Llama3-TenyxChat-70B for your research, cite us as
[ "### Hugging face Example\n\n\nPerformance\n===========\n\n\nAt the time of release (April 2024), Llama3-TenyxChat-70B is the highest-ranked open source model on the MT-Bench evaluation available for download.\n\n\nMT-Bench\n--------\n\n\nMT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using 'GPT-4-preview-0125' on a scale of 1 to 10, with higher values corresponding to better responses.\n\n\nModel-name: GPT-4-1106, GPT4-preview-0125 MT Bench: 8.79, Chat Arena Elo: 1251\nModel-name: Claude 3 Opus (20240229), GPT4-preview-0125 MT Bench: 8.57, Chat Arena Elo: 1247\nModel-name: Llama3-TenyxChat-70B, GPT4-preview-0125 MT Bench: 8.15, Chat Arena Elo: NA\nModel-name: *Llama3-70B-Instruct*, GPT4-preview-0125 MT Bench: 7.96, Chat Arena Elo: 1207\nModel-name: Claude 3 Sonnet (20240229), GPT4-preview-0125 MT Bench: 7.82, Chat Arena Elo: 1190\nModel-name: GPT-4-0314, GPT4-preview-0125 MT Bench: 7.96, Chat Arena Elo: 1185\nModel-name: Mixtral, GPT4-preview-0125 MT Bench: 7.38, Chat Arena Elo: 1114\nModel-name: gpt-3.5-turbo-0613, GPT4-preview-0125 MT Bench: 7.37, Chat Arena Elo: 1113\nModel-name: Yi-34B, GPT4-preview-0125 MT Bench: 6.46, Chat Arena Elo: 1099\nModel-name: gpt-3.5-turbo-0125, GPT4-preview-0125 MT Bench: 7.52, Chat Arena Elo: 1096\nModel-name: Llama 2 70B, GPT4-preview-0125 MT Bench: 6.01, Chat Arena Elo: 1082\nModel-name: NV-Llama2-70B-SteerLM-Chat, GPT4-preview-0125 MT Bench: 6.57, Chat Arena Elo: 1076\n\n\n!URL\n\n\nArena Hard\n----------\n\n\nArena-Hard is an evaluation tool for instruction-tuned LLMs containing 500 challenging user queries. They prompt GPT-4-1106-preview as judge to compare the models' responses against a baseline model (default: GPT-4-0314).\n\n\nModel-name: gpt-4-0125-preview, Score: 78.0\nModel-name: claude-3-opus-20240229, Score: 60.4\nModel-name: gpt-4-0314, Score: 50.0\nModel-name: tenyx/Llama3-TenyxChat-70B, Score: 49.0\nModel-name: *meta-llama/Meta-Llama-3-70B-In*, Score: 47.3\nModel-name: claude-3-sonnet-20240229, Score: 46.8\nModel-name: claude-3-haiku-20240307, Score: 41.5\nModel-name: gpt-4-0613, Score: 37.9\nModel-name: mistral-large-2402, Score: 37.7\nModel-name: Qwen1.5-72B-Chat, Score: 36.1\nModel-name: command-r-plus, Score: 33.1\n\n\nOpen LLM Leaderboard Evaluation\n-------------------------------\n\n\nWe now present our results on the Eleuther AI Language Model Evaluation Harness used for benchmarking Open LLM Leaderboard on Hugging Face.\nThe task involves evaluation on '6' key benchmarks across reasoning and knowledge with different *few-shot* settings. Read more details about the benchmark at the leaderboard page.\n\n\n\n\\*The results reported are from local evaluation of our model. 'tenyx/Llama3-TenyxChat-70B' is submitted and will be reflected in the leaderboard once evaluation succeeds.\n\n\nLimitations\n===========\n\n\nLlama3-TenyxChat-70B, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with human safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. 
In some instances, it might generate verbose or extraneous content.\n\n\nLicense\n=======\n\n\nLlama3-TenyxChat-70B is distributed under the Meta Llama 3 Community License.\n\n\nIf you use Llama3-TenyxChat-70B for your research, cite us as" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #tenyx-fine-tuning #dpo #tenyxchat #llama3 #conversational #en #dataset-HuggingFaceH4/ultrafeedback_binarized #arxiv-2305.18290 #arxiv-2306.05685 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #has_space \n", "### Hugging face Example\n\n\nPerformance\n===========\n\n\nAt the time of release (April 2024), Llama3-TenyxChat-70B is the highest-ranked open source model on the MT-Bench evaluation available for download.\n\n\nMT-Bench\n--------\n\n\nMT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using 'GPT-4-preview-0125' on a scale of 1 to 10, with higher values corresponding to better responses.\n\n\nModel-name: GPT-4-1106, GPT4-preview-0125 MT Bench: 8.79, Chat Arena Elo: 1251\nModel-name: Claude 3 Opus (20240229), GPT4-preview-0125 MT Bench: 8.57, Chat Arena Elo: 1247\nModel-name: Llama3-TenyxChat-70B, GPT4-preview-0125 MT Bench: 8.15, Chat Arena Elo: NA\nModel-name: *Llama3-70B-Instruct*, GPT4-preview-0125 MT Bench: 7.96, Chat Arena Elo: 1207\nModel-name: Claude 3 Sonnet (20240229), GPT4-preview-0125 MT Bench: 7.82, Chat Arena Elo: 1190\nModel-name: GPT-4-0314, GPT4-preview-0125 MT Bench: 7.96, Chat Arena Elo: 1185\nModel-name: Mixtral, GPT4-preview-0125 MT Bench: 7.38, Chat Arena Elo: 1114\nModel-name: gpt-3.5-turbo-0613, GPT4-preview-0125 MT Bench: 7.37, Chat Arena Elo: 1113\nModel-name: Yi-34B, GPT4-preview-0125 MT Bench: 6.46, Chat Arena Elo: 1099\nModel-name: gpt-3.5-turbo-0125, GPT4-preview-0125 MT Bench: 7.52, Chat Arena Elo: 1096\nModel-name: Llama 2 70B, GPT4-preview-0125 MT Bench: 6.01, Chat Arena Elo: 1082\nModel-name: NV-Llama2-70B-SteerLM-Chat, GPT4-preview-0125 MT Bench: 6.57, Chat Arena Elo: 1076\n\n\n!URL\n\n\nArena Hard\n----------\n\n\nArena-Hard is an evaluation tool for instruction-tuned LLMs containing 500 challenging user queries. They prompt GPT-4-1106-preview as judge to compare the models' responses against a baseline model (default: GPT-4-0314).\n\n\nModel-name: gpt-4-0125-preview, Score: 78.0\nModel-name: claude-3-opus-20240229, Score: 60.4\nModel-name: gpt-4-0314, Score: 50.0\nModel-name: tenyx/Llama3-TenyxChat-70B, Score: 49.0\nModel-name: *meta-llama/Meta-Llama-3-70B-In*, Score: 47.3\nModel-name: claude-3-sonnet-20240229, Score: 46.8\nModel-name: claude-3-haiku-20240307, Score: 41.5\nModel-name: gpt-4-0613, Score: 37.9\nModel-name: mistral-large-2402, Score: 37.7\nModel-name: Qwen1.5-72B-Chat, Score: 36.1\nModel-name: command-r-plus, Score: 33.1\n\n\nOpen LLM Leaderboard Evaluation\n-------------------------------\n\n\nWe now present our results on the Eleuther AI Language Model Evaluation Harness used for benchmarking Open LLM Leaderboard on Hugging Face.\nThe task involves evaluation on '6' key benchmarks across reasoning and knowledge with different *few-shot* settings. Read more details about the benchmark at the leaderboard page.\n\n\n\n\\*The results reported are from local evaluation of our model. 'tenyx/Llama3-TenyxChat-70B' is submitted and will be reflected in the leaderboard once evaluation succeeds.\n\n\nLimitations\n===========\n\n\nLlama3-TenyxChat-70B, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with human safety preferences. 
Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.\n\n\nLicense\n=======\n\n\nLlama3-TenyxChat-70B is distributed under the Meta Llama 3 Community License.\n\n\nIf you use Llama3-TenyxChat-70B for your research, cite us as" ]
text-generation
transformers
# mlx-community/Swallow-70b-instruct-v0.1-4bit This model was converted to MLX format from [`tokyotech-llm/Swallow-70b-instruct-v0.1`](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) using mlx-lm version **0.6.0**. Refer to the [original model card](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Swallow-70b-instruct-v0.1-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
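The snippet in the card above passes a raw string prompt to `generate`. For the instruct variant it can help to run the message through the tokenizer's chat template first; the sketch below is an assumption on top of the card (chat-template support varies across mlx-lm versions), so it falls back to the raw prompt when no template is available.

```python
# Hedged sketch (not from the original card): format a chat message with the
# tokenizer's chat template before generating, falling back to the raw prompt
# if this mlx-lm version or checkpoint exposes no template.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Swallow-70b-instruct-v0.1-4bit")

messages = [{"role": "user", "content": "Hi. Please introduce yourself."}]
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    prompt = messages[0]["content"]

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```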
{"language": ["en", "ja"], "license": "llama2", "library_name": "transformers", "tags": ["mlx"], "pipeline_tag": "text-generation", "model_type": "llama"}
mlx-community/Swallow-70b-instruct-v0.1-4bit
null
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "conversational", "en", "ja", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:31:31+00:00
[]
[ "en", "ja" ]
TAGS #transformers #safetensors #llama #text-generation #mlx #conversational #en #ja #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mlx-community/Swallow-70b-instruct-v0.1-4bit This model was converted to MLX format from ['tokyotech-llm/Swallow-70b-instruct-v0.1']() using mlx-lm version 0.6.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/Swallow-70b-instruct-v0.1-4bit\nThis model was converted to MLX format from ['tokyotech-llm/Swallow-70b-instruct-v0.1']() using mlx-lm version 0.6.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mlx #conversational #en #ja #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mlx-community/Swallow-70b-instruct-v0.1-4bit\nThis model was converted to MLX format from ['tokyotech-llm/Swallow-70b-instruct-v0.1']() using mlx-lm version 0.6.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_Instruction0_OSAPL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
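The auto-generated card above lists training hyperparameters but no inference snippet. Below is a minimal sketch of loading the checkpoint as a standard seq2seq model; the repository id is taken from this record, and the example input is only a placeholder since the instruction format expected by this COQE variant is not documented in the card.

```python
# Hedged sketch: load the fine-tuned ViT5 checkpoint as a standard seq2seq model.
# The input format this COQE variant expects is not documented in the card,
# so the example text below is only a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ThuyNT/CS505_COQE_viT5_train_Instruction0_OSAPL_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "<example Vietnamese comparative sentence goes here>"  # placeholder input
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```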
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_OSAPL_v1", "results": []}]}
ThuyNT/CS505_COQE_viT5_train_Instruction0_OSAPL_v1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:32:09+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_train_Instruction0_OSAPL_v1 This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_train_Instruction0_OSAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_train_Instruction0_OSAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-dpo-qlora This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-qlora](https://huggingface.co/alignment-handbook/zephyr-7b-sft-qlora) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.5018 - Rewards/chosen: -2.1482 - Rewards/rejected: -3.1540 - Rewards/accuracies: 0.7590 - Rewards/margins: 1.0058 - Logps/rejected: -556.6644 - Logps/chosen: -480.1277 - Logits/rejected: -1.2931 - Logits/chosen: -1.3827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6635 | 0.0523 | 100 | 0.6640 | 0.0129 | -0.0631 | 0.6830 | 0.0761 | -247.5831 | -264.0175 | -2.0469 | -2.1424 | | 0.6119 | 0.1047 | 200 | 0.6207 | -0.5556 | -0.8212 | 0.6790 | 0.2657 | -323.3911 | -320.8676 | -1.9524 | -2.0449 | | 0.5874 | 0.1570 | 300 | 0.5849 | -0.4240 | -0.8044 | 0.7000 | 0.3804 | -321.7128 | -307.7115 | -1.9609 | -2.0494 | | 0.5608 | 0.2094 | 400 | 0.5607 | -1.1817 | -1.7752 | 0.7290 | 0.5935 | -418.7894 | -383.4811 | -1.6969 | -1.7823 | | 0.5287 | 0.2617 | 500 | 0.5434 | -1.7248 | -2.4550 | 0.7250 | 0.7303 | -486.7726 | -437.7878 | -1.5394 | -1.6284 | | 0.5504 | 0.3141 | 600 | 0.5278 | -1.3541 | -2.1302 | 0.7370 | 0.7761 | -454.2872 | -400.7156 | -1.4439 | -1.5287 | | 0.5243 | 0.3664 | 700 | 0.5278 | -0.9934 | -1.7415 | 0.7420 | 0.7481 | -415.4179 | -364.6462 | -1.4888 | -1.5754 | | 0.5346 | 0.4187 | 800 | 0.5285 | -1.0509 | -1.8191 | 0.7360 | 0.7681 | -423.1764 | -370.4044 | -1.4861 | -1.5718 | | 0.5072 | 0.4711 | 900 | 0.5197 | -1.6324 | -2.5736 | 0.7300 | 0.9412 | -498.6239 | -428.5474 | -1.3651 | -1.4531 | | 0.5023 | 0.5234 | 1000 | 0.5158 | -1.6927 | -2.6755 | 0.7460 | 0.9828 | -508.8179 | -434.5808 | -1.2853 | -1.3779 | | 0.4954 | 0.5758 | 1100 | 0.5126 | -1.4605 | -2.3370 | 0.7480 | 0.8765 | -474.9688 | -411.3603 | -1.3921 | -1.4843 | | 0.4983 | 0.6281 | 1200 | 0.5105 | -2.0566 | -3.0678 | 0.7450 | 1.0112 | -548.0505 | -470.9687 | -1.1942 | -1.2848 | | 0.4774 | 0.6805 | 1300 | 0.5093 | -1.9802 | -3.0112 | 0.7510 | 1.0311 | -542.3931 | -463.3254 | -1.2574 | -1.3491 | | 0.4516 | 0.7328 | 1400 | 0.5058 | -2.1539 | -3.2003 | 0.7530 | 1.0464 | -561.2969 | -480.7002 | -1.2592 | -1.3500 | | 0.4758 | 0.7851 | 1500 | 0.5018 | -2.2342 | -3.2427 | 0.7550 | 1.0085 | -565.5339 | -488.7257 | -1.2803 | -1.3710 | | 0.4967 | 0.8375 | 1600 | 0.5019 | -2.1690 | -3.1744 | 0.7590 | 1.0054 | 
-558.7111 | -482.2090 | -1.2939 | -1.3837 | | 0.4769 | 0.8898 | 1700 | 0.5018 | -2.1431 | -3.1460 | 0.7600 | 1.0029 | -555.8691 | -479.6245 | -1.2936 | -1.3834 | | 0.4843 | 0.9422 | 1800 | 0.5019 | -2.1475 | -3.1534 | 0.7580 | 1.0059 | -556.6094 | -480.0620 | -1.2932 | -1.3829 | | 0.5048 | 0.9945 | 1900 | 0.5019 | -2.1484 | -3.1540 | 0.7590 | 1.0056 | -556.6639 | -480.1491 | -1.2933 | -1.3829 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.19.1
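This repository is a PEFT (QLoRA) adapter rather than full model weights. A minimal sketch of attaching the adapter to a base model for inference is shown below; the base id comes from this record's metadata, and applying the DPO adapter directly to raw Mistral-7B (rather than on top of the SFT adapter, as in the alignment-handbook recipe) is a simplification.

```python
# Hedged sketch (not from the card): attach the DPO QLoRA adapter to a base model
# with peft for inference. The alignment-handbook recipe stacks this adapter on an
# SFT checkpoint, so using raw Mistral-7B here is an assumption for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"      # from this record's metadata
adapter_id = "chrlu/zephyr-7b-dpo-qlora"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain in one paragraph what Direct Preference Optimization does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```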
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "zephyr-7b-dpo-qlora", "results": []}]}
chrlu/zephyr-7b-dpo-qlora
null
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "4-bit", "region:us" ]
null
2024-04-26T22:32:54+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #mistral #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #4-bit #region-us
zephyr-7b-dpo-qlora =================== This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-qlora on the HuggingFaceH4/ultrafeedback\_binarized dataset. It achieves the following results on the evaluation set: * Loss: 0.5018 * Rewards/chosen: -2.1482 * Rewards/rejected: -3.1540 * Rewards/accuracies: 0.7590 * Rewards/margins: 1.0058 * Logps/rejected: -556.6644 * Logps/chosen: -480.1277 * Logits/rejected: -1.2931 * Logits/chosen: -1.3827 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-06 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 2 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.40.1 * Pytorch 2.1.2 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #mistral #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #4-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
devesh220897/financial-chatbot-for-young-adults
null
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T22:33:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_Instruction0_PSOAL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_PSOAL_v1", "results": []}]}
ThuyNT/CS505_COQE_viT5_train_Instruction0_PSOAL_v1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:35:05+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_train_Instruction0_PSOAL_v1 This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_train_Instruction0_PSOAL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_train_Instruction0_PSOAL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Keiana-L3-Test6-8B-16 Keiana-L3-Test6-8B-16 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): # Keep in mind that this merged model has not been thoroughly tested yet, so it may produce vocabulary errors. * [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1) * [Kaoeiri/Keiana-L3-Test4.7-8B-3](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3) ## 🧩 Configuration ```yaml merge_method: model_stock dtype: float16 base_model: Kaoeiri/Keiana-L3-Test5.75-8B-13.5 models: - model: Sao10K/L3-Solana-8B-v1 parameters: weight: .2725 density: .385 - model: Kaoeiri/Keiana-L3-Test4.7-8B-3 parameters: weight: .2 density: .56 parameters: int8_mask: true ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kaoeiri/Keiana-L3-Test6-8B-16" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "Sao10K/L3-Solana-8B-v1", "Kaoeiri/Keiana-L3-Test4.7-8B-3"], "base_model": ["Sao10K/L3-Solana-8B-v1", "Kaoeiri/Keiana-L3-Test4.7-8B-3"]}
Kaoeiri/Keiana-L3-Test6-8B-16
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Sao10K/L3-Solana-8B-v1", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "conversational", "base_model:Sao10K/L3-Solana-8B-v1", "base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:35:55+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Sao10K/L3-Solana-8B-v1 #Kaoeiri/Keiana-L3-Test4.7-8B-3 #conversational #base_model-Sao10K/L3-Solana-8B-v1 #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Keiana-L3-Test6-8B-16 Keiana-L3-Test6-8B-16 is a merge of the following models using LazyMergekit: # Keep in mind that this merged model has not been thoroughly tested yet, so it may produce vocabulary errors. * Sao10K/L3-Solana-8B-v1 * Kaoeiri/Keiana-L3-Test4.7-8B-3 ## Configuration ## Usage
[ "# Keiana-L3-Test6-8B-16\n\nKeiana-L3-Test6-8B-16 is a merge of the following models using LazyMergekit:", "# Keep in mind that, this merged model isn't usually tested at the moment, which could benefit in vocabulary error.\n* Sao10K/L3-Solana-8B-v1\n* Kaoeiri/Keiana-L3-Test4.7-8B-3", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Sao10K/L3-Solana-8B-v1 #Kaoeiri/Keiana-L3-Test4.7-8B-3 #conversational #base_model-Sao10K/L3-Solana-8B-v1 #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Keiana-L3-Test6-8B-16\n\nKeiana-L3-Test6-8B-16 is a merge of the following models using LazyMergekit:", "# Keep in mind that, this merged model isn't usually tested at the moment, which could benefit in vocabulary error.\n* Sao10K/L3-Solana-8B-v1\n* Kaoeiri/Keiana-L3-Test4.7-8B-3", "## Configuration", "## Usage" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) * [o2satz/L3_med16QA](https://huggingface.co/o2satz/L3_med16QA) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: cognitivecomputations/dolphin-2.9-llama3-8b layer_range: - 0 - 32 - model: o2satz/L3_med16QA layer_range: - 0 - 32 merge_method: slerp base_model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: t: - filter: self_attn value: - 0 - 0.5 - 0.3 - 0.7 - 1 - filter: mlp value: - 1 - 0.5 - 0.7 - 0.3 - 0 - value: 0.5 dtype: bfloat16 ```
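The card above describes a SLERP merge with per-filter interpolation weights `t`. As an illustration of what a given `t` means, here is a small spherical-linear-interpolation sketch between two weight tensors in PyTorch; it conveys the idea only and is not mergekit's exact implementation.

```python
# Illustrative sketch of SLERP between two weight tensors; this shows the idea
# behind the per-filter `t` values in the config above, not mergekit's exact code.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))  # angle between the tensors
    if omega.abs() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

w_base = torch.randn(16, 16)   # stands in for a dolphin-2.9-llama3-8b weight matrix
w_other = torch.randn(16, 16)  # stands in for the o2satz/L3_med16QA counterpart
merged = slerp(0.5, w_base, w_other)  # t=0.5: halfway between the two models
```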
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cognitivecomputations/dolphin-2.9-llama3-8b", "o2satz/L3_med16QA"]}
o2satz/WS_med_QA_Dolphin
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "base_model:o2satz/L3_med16QA", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:36:35+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #base_model-o2satz/L3_med16QA #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * cognitivecomputations/dolphin-2.9-llama3-8b * o2satz/L3_med16QA ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.9-llama3-8b\n* o2satz/L3_med16QA", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #base_model-o2satz/L3_med16QA #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.9-llama3-8b\n* o2satz/L3_med16QA", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
reinforcement-learning
ml-agents
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: nvasko/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
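The card above covers resuming training and watching the agent in the browser. As a hedged addition (not part of the card), the trained checkpoint, including the `.onnx` file, can be pulled locally with `huggingface_hub`, assuming that package is installed.

```python
# Hedged sketch (not from the card): download the trained SoccerTwos checkpoint
# locally, e.g. to inspect the .onnx file or resume training from it.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvasko/poca-SoccerTwos",
    local_dir="./downloads/poca-SoccerTwos",
)
print("Files downloaded to:", local_dir)
```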
{"library_name": "ml-agents", "tags": ["SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos"]}
nvasko/poca-SoccerTwos
null
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
null
2024-04-26T22:37:31+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #SoccerTwos #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SoccerTwos #region-us
# poca Agent playing SoccerTwos This is a trained model of a poca agent playing SoccerTwos using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: nvasko/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# poca Agent playing SoccerTwos\n This is a trained model of a poca agent playing SoccerTwos\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: nvasko/poca-SoccerTwos\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #SoccerTwos #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SoccerTwos #region-us \n", "# poca Agent playing SoccerTwos\n This is a trained model of a poca agent playing SoccerTwos\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: nvasko/poca-SoccerTwos\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-brain-xray This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sartajbhuvaji/Brain-Tumor-Classification dataset. It achieves the following results on the evaluation set: - Loss: 0.9079 - Accuracy: 0.6904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2478 | 0.5556 | 100 | 0.9079 | 0.6904 | | 0.1499 | 1.1111 | 200 | 1.1543 | 0.7183 | | 0.0872 | 1.6667 | 300 | 1.1469 | 0.7614 | | 0.0118 | 2.2222 | 400 | 1.2361 | 0.7259 | | 0.0077 | 2.7778 | 500 | 1.2023 | 0.7665 | | 0.0057 | 3.3333 | 600 | 1.2470 | 0.7640 | | 0.0053 | 3.8889 | 700 | 1.2096 | 0.7766 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
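As a quick-start sketch (not part of the original card): the checkpoint should load with the standard `transformers` image-classification pipeline; the image path below is only a placeholder for a local scan.

```python
from transformers import pipeline

# Load the fine-tuned ViT classifier straight from the Hub.
classifier = pipeline("image-classification", model="abdulelahagr/vit-base-brain-xray")

# "brain_scan.jpg" is a placeholder path to a local image.
for prediction in classifier("brain_scan.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```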
{"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-base-brain-xray", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "sartajbhuvaji/Brain-Tumor-Classification", "type": "imagefolder", "config": "default", "split": "Testing", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6903553299492385, "name": "Accuracy"}]}]}]}
abdulelahagr/vit-base-brain-xray
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:39:07+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
vit-base-brain-xray =================== This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the sartajbhuvaji/Brain-Tumor-Classification dataset. It achieves the following results on the evaluation set: * Loss: 0.9079 * Accuracy: 0.6904 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Uploaded model

- **Developed by:** mahiatlinux
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
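A minimal inference sketch, assuming the repository contains merged `transformers`-compatible weights (the tags list a PyTorch Llama checkpoint), that a chat template ships with the tokenizer, and that `accelerate` plus a GPU are available; the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mahiatlinux/MasherAI-7B-v6.2-test2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Format a single-turn conversation with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```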
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
mahiatlinux/MasherAI-7B-v6.2-test2
null
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T22:39:31+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: mahiatlinux - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: mahiatlinux\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: mahiatlinux\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_4096_512_46M-L1_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4153 - F1 Score: 0.8196 - Accuracy: 0.8197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4799 | 1.1 | 200 | 0.4446 | 0.8177 | 0.8176 | | 0.4462 | 2.21 | 400 | 0.4431 | 0.8016 | 0.8037 | | 0.4375 | 3.31 | 600 | 0.4297 | 0.8090 | 0.8103 | | 0.4268 | 4.42 | 800 | 0.4235 | 0.8155 | 0.8166 | | 0.4258 | 5.52 | 1000 | 0.4296 | 0.8065 | 0.8086 | | 0.4183 | 6.63 | 1200 | 0.4304 | 0.8095 | 0.8114 | | 0.4204 | 7.73 | 1400 | 0.4159 | 0.8174 | 0.8183 | | 0.4136 | 8.84 | 1600 | 0.4270 | 0.8088 | 0.8107 | | 0.4116 | 9.94 | 1800 | 0.4122 | 0.8254 | 0.8252 | | 0.4059 | 11.05 | 2000 | 0.4211 | 0.8145 | 0.8155 | | 0.4079 | 12.15 | 2200 | 0.4105 | 0.8251 | 0.8252 | | 0.3999 | 13.26 | 2400 | 0.4121 | 0.8215 | 0.8221 | | 0.4004 | 14.36 | 2600 | 0.4111 | 0.8208 | 0.8211 | | 0.4 | 15.47 | 2800 | 0.4195 | 0.8135 | 0.8148 | | 0.3957 | 16.57 | 3000 | 0.4134 | 0.8212 | 0.8211 | | 0.3955 | 17.68 | 3200 | 0.4111 | 0.8225 | 0.8228 | | 0.3926 | 18.78 | 3400 | 0.4149 | 0.8223 | 0.8228 | | 0.3896 | 19.89 | 3600 | 0.4149 | 0.8216 | 0.8221 | | 0.3921 | 20.99 | 3800 | 0.4159 | 0.8204 | 0.8211 | | 0.3895 | 22.1 | 4000 | 0.4121 | 0.8194 | 0.8200 | | 0.3857 | 23.2 | 4200 | 0.4133 | 0.8213 | 0.8218 | | 0.3866 | 24.31 | 4400 | 0.4180 | 0.8206 | 0.8214 | | 0.3797 | 25.41 | 4600 | 0.4145 | 0.8245 | 0.8249 | | 0.385 | 26.52 | 4800 | 0.4160 | 0.8230 | 0.8235 | | 0.384 | 27.62 | 5000 | 0.4144 | 0.8237 | 0.8242 | | 0.3796 | 28.73 | 5200 | 0.4158 | 0.8176 | 0.8187 | | 0.3772 | 29.83 | 5400 | 0.4124 | 0.8267 | 0.8270 | | 0.3771 | 30.94 | 5600 | 0.4157 | 0.8279 | 0.8280 | | 0.377 | 32.04 | 5800 | 0.4146 | 0.8297 | 0.8298 | | 0.3779 | 33.15 | 6000 | 0.4135 | 0.8277 | 0.8280 | | 0.3741 | 34.25 | 6200 | 0.4180 | 0.8259 | 0.8263 | | 0.3733 | 35.36 | 6400 | 0.4232 | 0.8240 | 0.8245 | | 0.3751 | 36.46 | 6600 | 0.4161 | 0.8254 | 0.8256 | | 0.3729 | 37.57 | 6800 | 0.4187 | 0.8231 | 0.8239 | | 0.3732 | 38.67 | 7000 | 0.4192 | 0.8252 | 0.8256 | | 0.369 | 39.78 | 7200 | 0.4170 | 0.8283 | 0.8287 | | 0.3711 | 40.88 | 7400 | 0.4170 | 0.8252 | 0.8256 | | 0.3687 | 41.99 | 7600 | 0.4171 | 0.8256 | 0.8259 | | 0.3679 | 43.09 | 7800 | 0.4220 | 0.8237 | 0.8242 | | 0.3711 | 44.2 | 8000 | 0.4207 | 0.8236 | 0.8242 | | 0.3666 | 45.3 | 8200 | 0.4185 | 0.8277 | 0.8280 | | 0.3659 | 46.41 | 8400 | 0.4203 | 0.8268 | 0.8270 | | 0.3707 | 47.51 | 8600 | 0.4211 | 0.8247 | 0.8252 | | 0.3634 | 48.62 | 8800 | 0.4217 | 0.8257 | 0.8263 | | 0.3632 | 
49.72 | 9000 | 0.4220 | 0.8263 | 0.8266 | | 0.367 | 50.83 | 9200 | 0.4223 | 0.8242 | 0.8249 | | 0.3636 | 51.93 | 9400 | 0.4223 | 0.8247 | 0.8252 | | 0.3653 | 53.04 | 9600 | 0.4199 | 0.8277 | 0.8280 | | 0.3633 | 54.14 | 9800 | 0.4204 | 0.8277 | 0.8280 | | 0.3624 | 55.25 | 10000 | 0.4216 | 0.8262 | 0.8266 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
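A loading sketch for the adapter (not part of the original card): it assumes the seqsight base checkpoint can be loaded through the standard auto classes with a two-label classification head, possibly requiring `trust_remote_code=True`, and the DNA sequence shown is purely illustrative.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_46M-L1_f"

# Load the base model with an (assumed) binary classification head, then attach the PEFT adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Score a toy DNA sequence for the H3K79me3 mark (label meanings follow the GUE dataset).
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```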
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_4096_512_46M-L1_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_46M-L1_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T22:40:42+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3K79me3-seqsight\_4096\_512\_46M-L1\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4153 * F1 Score: 0.8196 * Accuracy: 0.8197 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
# Dolphin 2.9 Llama 3 70b 🐬 (GGUF) Quantized and converted to GGUF for Ollama. ## Example Modelfile ``` FROM ./dolphin-2.9-llama3-70b-Q6_K.gguf TEMPLATE """{{ if .System }}<|im_start|>system {{ .System }}<|im_end|> {{ end }}{{ if .Prompt }}<|im_start|>user {{ .Prompt }}<|im_end|> {{ end }}<|im_start|>assistant {{ .Response }}<|im_end|> """ SYSTEM """You are Dolphin, a helpful AI assistant. """ PARAMETER num_ctx 8192 PARAMETER stop "<|im_start|>" PARAMETER stop "<|im_end|>" ``` ## Provided files | Name | Quant method | Size | | ---- | ---- | ---- | | [dolphin-2.9-llama3-70b-Q6_K.gguf-part00000](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q6_K.gguf-part00000) [dolphin-2.9-llama3-70b-Q6_K.gguf-part00001](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q6_K.gguf-part00001) | Q6_K | 54G | | [dolphin-2.9-llama3-70b-Q8_0.gguf-part00000](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q8_0.gguf-part00000) [dolphin-2.9-llama3-70b-Q8_0.gguf-part00001](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q8_0.gguf-part00001) | Q8_0 | 70G | ## Combine multi part files into one GGUF ``` cat dolphin-2.9-llama3-70b-Q6_K.gguf-part00000 dolphin-2.9-llama3-70b-Q6_K.gguf-part00001 > dolphin-2.9-llama3-70b-Q6_K.gguf ```
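The same reassembly can be scripted. Here is a minimal sketch with `huggingface_hub` that downloads the Q6_K parts listed above and concatenates them, mirroring the `cat` command; only the file names from the table are assumed.

```python
import shutil
from huggingface_hub import hf_hub_download

repo_id = "TrabEsrever/dolphin-2.9-llama3-70b-GGUF"
parts = [
    "dolphin-2.9-llama3-70b-Q6_K.gguf-part00000",
    "dolphin-2.9-llama3-70b-Q6_K.gguf-part00001",
]

# Download each split part, then append them in order into a single GGUF file.
with open("dolphin-2.9-llama3-70b-Q6_K.gguf", "wb") as combined:
    for name in parts:
        part_path = hf_hub_download(repo_id=repo_id, filename=name)
        with open(part_path, "rb") as part:
            shutil.copyfileobj(part, combined)
```

The combined file is what the `FROM` line in the Modelfile above points at; registering it with Ollama is then a matter of `ollama create <name> -f Modelfile`.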
{"license": "llama3", "tags": ["gguf", "llama3", "ollama", "dolphin"], "base_model": "cognitivecomputations/dolphin-2.9-llama3-70b"}
TrabEsrever/dolphin-2.9-llama3-70b-GGUF
null
[ "gguf", "llama3", "ollama", "dolphin", "base_model:cognitivecomputations/dolphin-2.9-llama3-70b", "license:llama3", "region:us" ]
null
2024-04-26T22:41:00+00:00
[]
[]
TAGS #gguf #llama3 #ollama #dolphin #base_model-cognitivecomputations/dolphin-2.9-llama3-70b #license-llama3 #region-us
Dolphin 2.9 Llama 3 70b (GGUF) ============================== Quantized and converted to GGUF for Ollama. Example Modelfile ----------------- Provided files -------------- Name: dolphin-2.9-llama3-70b-Q6\_K.gguf-part00000 dolphin-2.9-llama3-70b-Q6\_K.gguf-part00001, Quant method: Q6\_K, Size: 54G Name: dolphin-2.9-llama3-70b-Q8\_0.gguf-part00000 dolphin-2.9-llama3-70b-Q8\_0.gguf-part00001, Quant method: Q8\_0, Size: 70G Combine multi part files into one GGUF --------------------------------------
[]
[ "TAGS\n#gguf #llama3 #ollama #dolphin #base_model-cognitivecomputations/dolphin-2.9-llama3-70b #license-llama3 #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloom-1b1 - bnb 4bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloom-1b1/ Original model description: --- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors) ## Model Details ### Basics *This section provides information for anyone who wants to know about the model.* <details> <summary>Click to expand</summary> <br/> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** [email protected] **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ### Technical Specifications *This section provides information for people who work on model development.* <details> <summary>Click to expand</summary><br/> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. 
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 1,065,314,304 parameters: * 385,351,680 embedding parameters * 24 layers, 16 attention heads * Hidden layers are 1536-dimensional * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). * Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) #### **Training** Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11d-760M-logs) - Number of epochs: 1 - Dates: - Started 11th March, 2022 11:42am PST - Ended 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) - Server training location: Île-de-France, France #### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. </details> ### Environmental Impact <details> <summary>Click to expand</summary><br/> The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* </details> <p>&nbsp;</p> ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. 
It provides information for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary><br/> ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> <p>&nbsp;</p> ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* <details> <summary>Click to expand</summary><br/> Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) 
#### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) The following table shows the further distribution of Niger-Congo and Indic languages in the training data. <details> <summary>Click to expand</summary><br/> | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | </details> The following table shows the distribution of programming languages. <details> <summary>Click to expand</summary><br/> | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | </details> </details> <p>&nbsp;</p> ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary><br/> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs </details> <p>&nbsp;</p> ## Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary><br/> ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.7 - Validation Loss: 3.1 - Perplexity: 21.9 (More evaluation scores forthcoming at the end of model training.) </details> <p>&nbsp;</p> ## Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary><br/> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. </details> <p>&nbsp;</p> ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary><br/> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). 
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. </details> <p>&nbsp;</p> ## More Information <details> <summary>Click to expand</summary><br/> ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book </details> <p>&nbsp;</p> ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
{}
RichardErkhov/bigscience_-_bloom-1b1-4bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T22:42:49+00:00
[ "1909.08053", "2110.02861", "2108.12409" ]
[]
TAGS #transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bloom-1b1 - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: bigscience-bloom-rail-1.0 language: * ak * ar * as * bm * bn * ca * code * en * es * eu * fon * fr * gu * hi * id * ig * ki * kn * lg * ln * ml * mr * ne * nso * ny * or * pa * pt * rn * rw * sn * st * sw * ta * te * tn * ts * tum * tw * ur * vi * wo * xh * yo * zh * zhs * zht * zu pipeline\_tag: text-generation --- BLOOM LM ======== *BigScience Large Open-science Open-access Multilingual Language Model* ----------------------------------------------------------------------- ### Model Card ![](URL alt=) Version 1.0 / 26.May.2022 Table of Contents ----------------- 1. Model Details 2. Uses 3. Training Data 4. Risks and Limitations 5. Evaluation 6. Recommendations 7. Glossary and Calculations 8. More Information 9. Model Card Authors Model Details ------------- ### Basics *This section provides information for anyone who wants to know about the model.* Click to expand Developed by: BigScience (website) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* Model Type: Transformer-based Language Model Version: 1.0.0 Languages: Multiple; see training data License: RAIL License v1.0 (link) Release Date Estimate: Monday, 11.July.2022 Send Questions to: bigscience-contact@URL Cite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022 Funded by: * The French government. * Hugging Face (website). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* ### Technical Specifications *This section provides information for people who work on model development.* Click to expand Please see the BLOOM training README for full details on replicating training. Model Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code): * Decoder-only architecture * Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper) * ALiBI positional encodings (see paper), with GeLU activation functions * 1,065,314,304 parameters: + 385,351,680 embedding parameters + 24 layers, 16 attention heads + Hidden layers are 1536-dimensional + Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description) Objective Function: Cross Entropy with mean reduction (see API documentation). Compute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement). 
* Hardware: 384 A100 80GB GPUs (48 nodes): + Additional 32 A100 80GB GPUs (4 nodes) in reserve + 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links + CPU: AMD + CPU memory: 512GB per node + GPU memory: 640GB per node + Inter-node connect: Omni-Path Architecture (OPA) + NCCL-communications network: a fully dedicated subnet + Disc IO network: shared network with other types of nodes * Software: + Megatron-DeepSpeed (Github link) + DeepSpeed (Github link) + PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link) + apex (Github link) #### Training Training logs: Tensorboard link * Number of epochs: 1 * Dates: + Started 11th March, 2022 11:42am PST + Ended 5th July, 2022 * Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) * Server training location: Île-de-France, France #### Tokenization The BLOOM tokenizer (link) is a learned subword tokenizer trained using: * A byte-level Byte Pair Encoding (BPE) algorithm * A simple pre-tokenization rule, no normalization * A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. ### Environmental Impact Click to expand The training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. Estimated carbon emissions: *(Forthcoming upon completion of training.)* Estimated electricity usage: *(Forthcoming upon completion of training.)*   Uses ---- *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* Click to expand ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### Direct Use * Text generation * Exploring characteristics of language generated by a language model + Examples: Cloze tests, counterfactuals, generations with reframings #### Downstream Use * Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### Out-of-scope Uses Using the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: * Usage in biomedical domains, political and legal domains, or finance domains * Usage for evaluating or scoring individuals, such as for employment, education, or credit * Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### Misuse Intentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes: * Spam generation * Disinformation and influence operations * Disparagement and defamation * Harassment and abuse * Deception * Unconsented impersonation and imitation * Unconsented surveillance * Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions ### Intended Users #### Direct Users * General Public * Researchers * Students * Educators * Engineers/developers * Non-commercial entities * Community advocates, including human and civil rights groups #### Indirect Users * Users of derivatives created by Direct Users, such as those using software with an intended use * Users of Derivatives of the Model, as described in the License #### Others Affected (Parties Prenantes) * People and groups referred to by the LLM * People and groups exposed to outputs of, or decisions based on, the LLM * People and groups whose original work is included in the LLM   Training Data ------------- *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Click to expand Details for each dataset are provided in individual Data Cards. Training data includes: * 45 natural languages * 12 programming languages * In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.) #### Languages The pie chart shows the distribution of languages in training data. !pie chart showing the distribution of languages in training data The following table shows the further distribution of Niger-Congo and Indic languages in the training data. Click to expand The following table shows the distribution of programming languages. Click to expand Extension: java, Language: Java, Number of files: 5,407,724 Extension: php, Language: PHP, Number of files: 4,942,186 Extension: cpp, Language: C++, Number of files: 2,503,930 Extension: py, Language: Python, Number of files: 2,435,072 Extension: js, Language: JavaScript, Number of files: 1,905,518 Extension: cs, Language: C#, Number of files: 1,577,347 Extension: rb, Language: Ruby, Number of files: 6,78,413 Extension: cc, Language: C++, Number of files: 443,054 Extension: hpp, Language: C++, Number of files: 391,048 Extension: lua, Language: Lua, Number of files: 352,317 Extension: go, Language: GO, Number of files: 227,763 Extension: ts, Language: TypeScript, Number of files: 195,254 Extension: C, Language: C, Number of files: 134,537 Extension: scala, Language: Scala, Number of files: 92,052 Extension: hh, Language: C++, Number of files: 67,161 Extension: H, Language: C++, Number of files: 55,899 Extension: tsx, Language: TypeScript, Number of files: 33,107 Extension: rs, Language: Rust, Number of files: 29,693 Extension: phpt, Language: PHP, Number of files: 9,702 Extension: c++, Language: C++, Number of files: 1,342 Extension: h++, Language: C++, Number of files: 791 Extension: php3, Language: PHP, Number of files: 540 Extension: phps, Language: PHP, Number of files: 270 Extension: php5, Language: PHP, Number of files: 166 Extension: php4, Language: PHP, Number of files: 29   Risks and Limitations --------------------- *This section identifies foreseeable harms and misunderstandings.* Click to expand Model may: * Overrepresent some viewpoints and underrepresent others * Contain stereotypes * Contain personal information * Generate: + Hateful, abusive, or violent language + Discriminatory or prejudicial language + Content that may not be appropriate for all settings, including sexual content * 
Make errors, including producing incorrect information as if it were factual * Generate irrelevant or repetitive outputs   Evaluation ---------- *This section describes the evaluation protocols and provides the results.* Click to expand ### Metrics *This section describes the different ways performance is calculated and why.* Includes: And multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)* ### Factors *This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* * Language, such as English or Yoruba * Domain, such as newswire or stories * Demographic characteristics, such as gender or nationality ### Results *Results are based on the Factors and Metrics.* Train-time Evaluation: As of 25.May.2022, 15:00 PST: * Training Loss: 2.7 * Validation Loss: 3.1 * Perplexity: 21.9 (More evaluation scores forthcoming at the end of model training.)   Recommendations --------------- *This section provides information on warnings and potential mitigations.* Click to expand * Indirect users should be made aware when the content they're working with is created by the LLM. * Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. * Models pretrained with the LLM should include an updated Model Card. * Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.   Glossary and Calculations ------------------------- *This section defines common terms and how metrics are calculated.* Click to expand * Loss: A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. * Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. * High-stakes settings: Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed Artificial Intelligence (AI) Act. * Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act. * Human rights: Includes those rights defined in the Universal Declaration of Human Rights. * Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as "personal data" in the European Union's General Data Protection Regulation; and "personal information" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law. * Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1) * Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.   
More Information ---------------- Click to expand ### Dataset Creation Blog post detailing the design choices during the dataset creation: URL ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: URL More details on the architecture/optimizer: URL Blog post on the hardware/engineering side: URL Details on the distributed setup used for the training: URL Tensorboard updated during the training: URL Insights on how to approach training, negative results: URL Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL ### Initial Results Initial prompting experiments using interim checkpoints: URL   Model Card Authors ------------------ *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
[ "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Training Data\n4. Risks and Limitations\n5. Evaluation\n6. Recommendations\n7. Glossary and Calculations\n8. More Information\n9. Model Card Authors\n\n\nModel Details\n-------------", "### Basics\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n\nClick to expand \n\nDeveloped by: BigScience (website)\n\n\n* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n\n\nModel Type: Transformer-based Language Model\n\n\nVersion: 1.0.0\n\n\nLanguages: Multiple; see training data\n\n\nLicense: RAIL License v1.0 (link)\n\n\nRelease Date Estimate: Monday, 11.July.2022\n\n\nSend Questions to: bigscience-contact@URL\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nFunded by:\n\n\n* The French government.\n* Hugging Face (website).\n* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*", "### Technical Specifications\n\n\n*This section provides information for people who work on model development.*\n\n\n\nClick to expand \n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 1,065,314,304 parameters:\n\n\n\t+ 385,351,680 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1536-dimensional\n\t+ Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "#### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Number of epochs: 1\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "#### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.", "### Environmental Impact\n\n\n\nClick to expand \n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\n\n \n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*\n\n\n\nClick to expand", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\n\n \n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\n\nClick to expand \n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nClick to expand \n\n\n\nThe following table shows the distribution of programming languages.\n\n\n\nClick to expand \n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\n\n\n \n\n\nRisks and Limitations\n---------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\n\nClick to expand \n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs\n\n\n\n \n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*\n\n\n\nClick to expand", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.7\n* Validation Loss: 3.1\n* Perplexity: 21.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\n\n \n\n\nRecommendations\n---------------\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n\nClick to expand \n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\n\n \n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n\nClick to expand \n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\n\n \n\n\nMore Information\n----------------\n\n\n\nClick to expand", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on 
how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\n\n \n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff" ]
[ "TAGS\n#transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Training Data\n4. Risks and Limitations\n5. Evaluation\n6. Recommendations\n7. Glossary and Calculations\n8. More Information\n9. Model Card Authors\n\n\nModel Details\n-------------", "### Basics\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n\nClick to expand \n\nDeveloped by: BigScience (website)\n\n\n* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n\n\nModel Type: Transformer-based Language Model\n\n\nVersion: 1.0.0\n\n\nLanguages: Multiple; see training data\n\n\nLicense: RAIL License v1.0 (link)\n\n\nRelease Date Estimate: Monday, 11.July.2022\n\n\nSend Questions to: bigscience-contact@URL\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nFunded by:\n\n\n* The French government.\n* Hugging Face (website).\n* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*", "### Technical Specifications\n\n\n*This section provides information for people who work on model development.*\n\n\n\nClick to expand \n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 1,065,314,304 parameters:\n\n\n\t+ 385,351,680 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1536-dimensional\n\t+ Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "#### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Number of epochs: 1\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "#### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary 
version of the corpus using alpha-weighting per language.", "### Environmental Impact\n\n\n\nClick to expand \n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\n\n \n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*\n\n\n\nClick to expand", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\n\n \n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\n\nClick to expand \n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nClick to expand \n\n\n\nThe following table shows the distribution of programming languages.\n\n\n\nClick to expand \n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\n\n\n \n\n\nRisks and Limitations\n---------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\n\nClick to expand \n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain 
personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs\n\n\n\n \n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*\n\n\n\nClick to expand", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.7\n* Validation Loss: 3.1\n* Perplexity: 21.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\n\n \n\n\nRecommendations\n---------------\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n\nClick to expand \n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\n\n \n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n\nClick to expand \n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\n\n \n\n\nMore Information\n----------------\n\n\n\nClick to expand", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\n\n \n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloom-1b1 - bnb 8bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloom-1b1/ Original model description: --- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors) ## Model Details ### Basics *This section provides information for anyone who wants to know about the model.* <details> <summary>Click to expand</summary> <br/> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** [email protected] **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ### Technical Specifications *This section provides information for people who work on model development.* <details> <summary>Click to expand</summary><br/> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. 
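For inference (as opposed to replicating training), a minimal usage sketch for this checkpoint is shown below. It is a hedged sketch, not an official recipe: it assumes a CUDA GPU with the `accelerate` and `bitsandbytes` packages installed, and that the 8-bit repository named at the top of this card stores its quantization settings in the checkpoint config.

```python
# Hedged sketch: load the 8-bit (bitsandbytes) BLOOM-1b1 checkpoint from this repository and generate text.
# Assumes a CUDA GPU and that `accelerate` and `bitsandbytes` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/bigscience_-_bloom-1b1-8bits"  # repository id of this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint is assumed to ship its own 8-bit quantization config,
# so no explicit BitsAndBytesConfig is passed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```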
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 1,065,314,304 parameters: * 385,351,680 embedding parameters * 24 layers, 16 attention heads * Hidden layers are 1536-dimensional * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). * Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) #### **Training** Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11d-760M-logs) - Number of epochs: 1 - Dates: - Started 11th March, 2022 11:42am PST - Ended 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) - Server training location: Île-de-France, France #### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. </details> ### Environmental Impact <details> <summary>Click to expand</summary><br/> The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* </details> <p>&nbsp;</p> ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. 
It provides information for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary><br/> ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> <p>&nbsp;</p> ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* <details> <summary>Click to expand</summary><br/> Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) 
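The token counts above come from the BLOOM tokenizer described in the [Tokenization](#tokenization) subsection. The snippet below is a small, hedged sketch for inspecting it; it assumes the tokenizer bundled with this checkpoint matches the BLOOM tokenizer release (byte-level BPE, 250,680-entry vocabulary).

```python
# Hedged sketch: inspect the byte-level BPE tokenizer used to produce the token counts above.
# Assumes the tokenizer shipped with this checkpoint matches the BLOOM tokenizer described in the card.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("RichardErkhov/bigscience_-_bloom-1b1-8bits")

print(tokenizer.vocab_size)                                 # expected 250680, per the Tokenization subsection
print(tokenizer.tokenize("Bonjour, le monde !"))            # subword pieces produced by the byte-level BPE
print(len(tokenizer("Bonjour, le monde !")["input_ids"]))   # token count for this sample
```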
#### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) The following table shows the further distribution of Niger-Congo and Indic languages in the training data. <details> <summary>Click to expand</summary><br/> | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | </details> The following table shows the distribution of programming languages. <details> <summary>Click to expand</summary><br/> | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | </details> </details> <p>&nbsp;</p> ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary><br/> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs </details> <p>&nbsp;</p> ## Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary><br/> ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.7 - Validation Loss: 3.1 - Perplexity: 21.9 (More evaluation scores forthcoming at the end of model training.) </details> <p>&nbsp;</p> ## Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary><br/> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. </details> <p>&nbsp;</p> ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary><br/> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). 
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. </details> <p>&nbsp;</p> ## More Information <details> <summary>Click to expand</summary><br/> ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book </details> <p>&nbsp;</p> ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
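As a quick numerical check tying the glossary's perplexity entry to the train-time results reported above: perplexity is the exponential of the cross-entropy loss, and exponentiating the reported validation loss of 3.1 gives roughly 22, which matches the reported perplexity of 21.9 once rounding of the loss is taken into account.

```python
# Worked check of the Glossary's perplexity definition against the Results section above.
# perplexity = exp(cross-entropy loss); the small gap versus 21.9 comes from the loss being rounded to 3.1.
import math

validation_loss = 3.1              # reported train-time validation loss
print(math.exp(validation_loss))   # ~22.2
print(math.log(21.9))              # ~3.09, i.e. rounds to the reported 3.1
```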
{}
RichardErkhov/bigscience_-_bloom-1b1-8bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-26T22:44:10+00:00
[ "1909.08053", "2110.02861", "2108.12409" ]
[]
TAGS #transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bloom-1b1 - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: bigscience-bloom-rail-1.0 language: * ak * ar * as * bm * bn * ca * code * en * es * eu * fon * fr * gu * hi * id * ig * ki * kn * lg * ln * ml * mr * ne * nso * ny * or * pa * pt * rn * rw * sn * st * sw * ta * te * tn * ts * tum * tw * ur * vi * wo * xh * yo * zh * zhs * zht * zu pipeline\_tag: text-generation --- BLOOM LM ======== *BigScience Large Open-science Open-access Multilingual Language Model* ----------------------------------------------------------------------- ### Model Card ![](URL alt=) Version 1.0 / 26.May.2022 Table of Contents ----------------- 1. Model Details 2. Uses 3. Training Data 4. Risks and Limitations 5. Evaluation 6. Recommendations 7. Glossary and Calculations 8. More Information 9. Model Card Authors Model Details ------------- ### Basics *This section provides information for anyone who wants to know about the model.* Click to expand Developed by: BigScience (website) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* Model Type: Transformer-based Language Model Version: 1.0.0 Languages: Multiple; see training data License: RAIL License v1.0 (link) Release Date Estimate: Monday, 11.July.2022 Send Questions to: bigscience-contact@URL Cite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022 Funded by: * The French government. * Hugging Face (website). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* ### Technical Specifications *This section provides information for people who work on model development.* Click to expand Please see the BLOOM training README for full details on replicating training. Model Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code): * Decoder-only architecture * Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper) * ALiBI positional encodings (see paper), with GeLU activation functions * 1,065,314,304 parameters: + 385,351,680 embedding parameters + 24 layers, 16 attention heads + Hidden layers are 1536-dimensional + Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description) Objective Function: Cross Entropy with mean reduction (see API documentation). Compute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement). 
* Hardware: 384 A100 80GB GPUs (48 nodes): + Additional 32 A100 80GB GPUs (4 nodes) in reserve + 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links + CPU: AMD + CPU memory: 512GB per node + GPU memory: 640GB per node + Inter-node connect: Omni-Path Architecture (OPA) + NCCL-communications network: a fully dedicated subnet + Disc IO network: shared network with other types of nodes * Software: + Megatron-DeepSpeed (Github link) + DeepSpeed (Github link) + PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link) + apex (Github link) #### Training Training logs: Tensorboard link * Number of epochs: 1 * Dates: + Started 11th March, 2022 11:42am PST + Ended 5th July, 2022 * Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) * Server training location: Île-de-France, France #### Tokenization The BLOOM tokenizer (link) is a learned subword tokenizer trained using: * A byte-level Byte Pair Encoding (BPE) algorithm * A simple pre-tokenization rule, no normalization * A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. ### Environmental Impact Click to expand The training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. Estimated carbon emissions: *(Forthcoming upon completion of training.)* Estimated electricity usage: *(Forthcoming upon completion of training.)*   Uses ---- *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* Click to expand ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### Direct Use * Text generation * Exploring characteristics of language generated by a language model + Examples: Cloze tests, counterfactuals, generations with reframings #### Downstream Use * Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### Out-of-scope Uses Using the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: * Usage in biomedical domains, political and legal domains, or finance domains * Usage for evaluating or scoring individuals, such as for employment, education, or credit * Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### Misuse Intentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes: * Spam generation * Disinformation and influence operations * Disparagement and defamation * Harassment and abuse * Deception * Unconsented impersonation and imitation * Unconsented surveillance * Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions ### Intended Users #### Direct Users * General Public * Researchers * Students * Educators * Engineers/developers * Non-commercial entities * Community advocates, including human and civil rights groups #### Indirect Users * Users of derivatives created by Direct Users, such as those using software with an intended use * Users of Derivatives of the Model, as described in the License #### Others Affected (Parties Prenantes) * People and groups referred to by the LLM * People and groups exposed to outputs of, or decisions based on, the LLM * People and groups whose original work is included in the LLM   Training Data ------------- *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Click to expand Details for each dataset are provided in individual Data Cards. Training data includes: * 45 natural languages * 12 programming languages * In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.) #### Languages The pie chart shows the distribution of languages in training data. !pie chart showing the distribution of languages in training data The following table shows the further distribution of Niger-Congo and Indic languages in the training data. Click to expand The following table shows the distribution of programming languages. Click to expand Extension: java, Language: Java, Number of files: 5,407,724 Extension: php, Language: PHP, Number of files: 4,942,186 Extension: cpp, Language: C++, Number of files: 2,503,930 Extension: py, Language: Python, Number of files: 2,435,072 Extension: js, Language: JavaScript, Number of files: 1,905,518 Extension: cs, Language: C#, Number of files: 1,577,347 Extension: rb, Language: Ruby, Number of files: 6,78,413 Extension: cc, Language: C++, Number of files: 443,054 Extension: hpp, Language: C++, Number of files: 391,048 Extension: lua, Language: Lua, Number of files: 352,317 Extension: go, Language: GO, Number of files: 227,763 Extension: ts, Language: TypeScript, Number of files: 195,254 Extension: C, Language: C, Number of files: 134,537 Extension: scala, Language: Scala, Number of files: 92,052 Extension: hh, Language: C++, Number of files: 67,161 Extension: H, Language: C++, Number of files: 55,899 Extension: tsx, Language: TypeScript, Number of files: 33,107 Extension: rs, Language: Rust, Number of files: 29,693 Extension: phpt, Language: PHP, Number of files: 9,702 Extension: c++, Language: C++, Number of files: 1,342 Extension: h++, Language: C++, Number of files: 791 Extension: php3, Language: PHP, Number of files: 540 Extension: phps, Language: PHP, Number of files: 270 Extension: php5, Language: PHP, Number of files: 166 Extension: php4, Language: PHP, Number of files: 29   Risks and Limitations --------------------- *This section identifies foreseeable harms and misunderstandings.* Click to expand Model may: * Overrepresent some viewpoints and underrepresent others * Contain stereotypes * Contain personal information * Generate: + Hateful, abusive, or violent language + Discriminatory or prejudicial language + Content that may not be appropriate for all settings, including sexual content * 
Make errors, including producing incorrect information as if it were factual * Generate irrelevant or repetitive outputs   Evaluation ---------- *This section describes the evaluation protocols and provides the results.* Click to expand ### Metrics *This section describes the different ways performance is calculated and why.* Includes: And multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)* ### Factors *This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* * Language, such as English or Yoruba * Domain, such as newswire or stories * Demographic characteristics, such as gender or nationality ### Results *Results are based on the Factors and Metrics.* Train-time Evaluation: As of 25.May.2022, 15:00 PST: * Training Loss: 2.7 * Validation Loss: 3.1 * Perplexity: 21.9 (More evaluation scores forthcoming at the end of model training.)   Recommendations --------------- *This section provides information on warnings and potential mitigations.* Click to expand * Indirect users should be made aware when the content they're working with is created by the LLM. * Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. * Models pretrained with the LLM should include an updated Model Card. * Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.   Glossary and Calculations ------------------------- *This section defines common terms and how metrics are calculated.* Click to expand * Loss: A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. * Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. * High-stakes settings: Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed Artificial Intelligence (AI) Act. * Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act. * Human rights: Includes those rights defined in the Universal Declaration of Human Rights. * Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as "personal data" in the European Union's General Data Protection Regulation; and "personal information" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law. * Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1) * Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.   
More Information ---------------- Click to expand ### Dataset Creation Blog post detailing the design choices during the dataset creation: URL ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL More details on the architecture/optimizer: URL Blog post on the hardware/engineering side: URL Details on the distributed setup used for the training: URL Tensorboard updated during the training: URL Insights on how to approach training, negative results: URL Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL ### Initial Results Initial prompting experiments using interim checkpoints: URL   Model Card Authors ------------------ *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
[ "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Training Data\n4. Risks and Limitations\n5. Evaluation\n6. Recommendations\n7. Glossary and Calculations\n8. More Information\n9. Model Card Authors\n\n\nModel Details\n-------------", "### Basics\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n\nClick to expand \n\nDeveloped by: BigScience (website)\n\n\n* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n\n\nModel Type: Transformer-based Language Model\n\n\nVersion: 1.0.0\n\n\nLanguages: Multiple; see training data\n\n\nLicense: RAIL License v1.0 (link)\n\n\nRelease Date Estimate: Monday, 11.July.2022\n\n\nSend Questions to: bigscience-contact@URL\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nFunded by:\n\n\n* The French government.\n* Hugging Face (website).\n* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*", "### Technical Specifications\n\n\n*This section provides information for people who work on model development.*\n\n\n\nClick to expand \n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 1,065,314,304 parameters:\n\n\n\t+ 385,351,680 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1536-dimensional\n\t+ Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "#### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Number of epochs: 1\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "#### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.", "### Environmental Impact\n\n\n\nClick to expand \n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\n\n \n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*\n\n\n\nClick to expand", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\n\n \n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\n\nClick to expand \n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nClick to expand \n\n\n\nThe following table shows the distribution of programming languages.\n\n\n\nClick to expand \n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\n\n\n \n\n\nRisks and Limitations\n---------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\n\nClick to expand \n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs\n\n\n\n \n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*\n\n\n\nClick to expand", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.7\n* Validation Loss: 3.1\n* Perplexity: 21.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\n\n \n\n\nRecommendations\n---------------\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n\nClick to expand \n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\n\n \n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n\nClick to expand \n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\n\n \n\n\nMore Information\n----------------\n\n\n\nClick to expand", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on 
how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\n\n \n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff" ]
[ "TAGS\n#transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Training Data\n4. Risks and Limitations\n5. Evaluation\n6. Recommendations\n7. Glossary and Calculations\n8. More Information\n9. Model Card Authors\n\n\nModel Details\n-------------", "### Basics\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n\nClick to expand \n\nDeveloped by: BigScience (website)\n\n\n* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n\n\nModel Type: Transformer-based Language Model\n\n\nVersion: 1.0.0\n\n\nLanguages: Multiple; see training data\n\n\nLicense: RAIL License v1.0 (link)\n\n\nRelease Date Estimate: Monday, 11.July.2022\n\n\nSend Questions to: bigscience-contact@URL\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nFunded by:\n\n\n* The French government.\n* Hugging Face (website).\n* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*", "### Technical Specifications\n\n\n*This section provides information for people who work on model development.*\n\n\n\nClick to expand \n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 1,065,314,304 parameters:\n\n\n\t+ 385,351,680 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1536-dimensional\n\t+ Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "#### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Number of epochs: 1\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "#### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary 
version of the corpus using alpha-weighting per language.", "### Environmental Impact\n\n\n\nClick to expand \n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\n\n \n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*\n\n\n\nClick to expand", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\n\n \n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\n\nClick to expand \n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nClick to expand \n\n\n\nThe following table shows the distribution of programming languages.\n\n\n\nClick to expand \n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\n\n\n \n\n\nRisks and Limitations\n---------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\n\nClick to expand \n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain 
personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs\n\n\n\n \n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*\n\n\n\nClick to expand", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.7\n* Validation Loss: 3.1\n* Perplexity: 21.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\n\n \n\n\nRecommendations\n---------------\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n\nClick to expand \n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\n\n \n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n\nClick to expand \n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\n\n \n\n\nMore Information\n----------------\n\n\n\nClick to expand", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\n\n \n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff" ]
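For readers who want to try the roughly 1.1B-parameter checkpoint described in the card above in 8-bit (as suggested by the `8-bit` tag on this entry), the following is a rough sketch using `transformers` and `bitsandbytes`. The repository id `bigscience/bloom-1b1` is an assumption inferred from the parameter count in the card, not something stated in this entry; substitute the actual id listed for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed checkpoint id; the card above describes a 1,065,314,304-parameter BLOOM model.
model_id = "bigscience/bloom-1b1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # requires bitsandbytes and a GPU
    device_map="auto",                                          # requires accelerate
)

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```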
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset. It achieves the following results on the evaluation set: - Loss: 0.4237 - F1 Score: 0.8280 - Accuracy: 0.8280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.4707 | 1.1 | 200 | 0.4291 | 0.8228 | 0.8228 | | 0.4371 | 2.21 | 400 | 0.4204 | 0.8165 | 0.8173 | | 0.4271 | 3.31 | 600 | 0.4264 | 0.8078 | 0.8096 | | 0.4137 | 4.42 | 800 | 0.4192 | 0.8204 | 0.8211 | | 0.411 | 5.52 | 1000 | 0.4207 | 0.8150 | 0.8166 | | 0.4007 | 6.63 | 1200 | 0.4399 | 0.8105 | 0.8128 | | 0.3999 | 7.73 | 1400 | 0.4176 | 0.8174 | 0.8187 | | 0.3908 | 8.84 | 1600 | 0.4468 | 0.7974 | 0.8006 | | 0.3868 | 9.94 | 1800 | 0.4142 | 0.8220 | 0.8221 | | 0.3813 | 11.05 | 2000 | 0.4262 | 0.8174 | 0.8176 | | 0.3789 | 12.15 | 2200 | 0.4150 | 0.8262 | 0.8263 | | 0.3671 | 13.26 | 2400 | 0.4240 | 0.8222 | 0.8225 | | 0.37 | 14.36 | 2600 | 0.4270 | 0.8283 | 0.8284 | | 0.3653 | 15.47 | 2800 | 0.4309 | 0.8262 | 0.8266 | | 0.3582 | 16.57 | 3000 | 0.4206 | 0.8243 | 0.8242 | | 0.3558 | 17.68 | 3200 | 0.4275 | 0.8240 | 0.8242 | | 0.353 | 18.78 | 3400 | 0.4302 | 0.8182 | 0.8190 | | 0.3482 | 19.89 | 3600 | 0.4251 | 0.8245 | 0.8245 | | 0.3458 | 20.99 | 3800 | 0.4363 | 0.8171 | 0.8176 | | 0.3426 | 22.1 | 4000 | 0.4343 | 0.8218 | 0.8218 | | 0.3387 | 23.2 | 4200 | 0.4497 | 0.8240 | 0.8242 | | 0.3376 | 24.31 | 4400 | 0.4404 | 0.8164 | 0.8173 | | 0.3276 | 25.41 | 4600 | 0.4517 | 0.8171 | 0.8169 | | 0.3318 | 26.52 | 4800 | 0.4462 | 0.8161 | 0.8166 | | 0.3271 | 27.62 | 5000 | 0.4527 | 0.8191 | 0.8197 | | 0.3246 | 28.73 | 5200 | 0.4728 | 0.8073 | 0.8086 | | 0.3195 | 29.83 | 5400 | 0.4470 | 0.8235 | 0.8235 | | 0.319 | 30.94 | 5600 | 0.4466 | 0.8217 | 0.8218 | | 0.3159 | 32.04 | 5800 | 0.4485 | 0.8236 | 0.8235 | | 0.3139 | 33.15 | 6000 | 0.4624 | 0.8213 | 0.8214 | | 0.3091 | 34.25 | 6200 | 0.4689 | 0.8156 | 0.8155 | | 0.31 | 35.36 | 6400 | 0.4868 | 0.8170 | 0.8176 | | 0.3063 | 36.46 | 6600 | 0.4621 | 0.8180 | 0.8183 | | 0.3038 | 37.57 | 6800 | 0.4723 | 0.8163 | 0.8169 | | 0.3037 | 38.67 | 7000 | 0.4809 | 0.8154 | 0.8159 | | 0.3012 | 39.78 | 7200 | 0.4831 | 0.8214 | 0.8218 | | 0.3 | 40.88 | 7400 | 0.4767 | 0.8175 | 0.8176 | | 0.2954 | 41.99 | 7600 | 0.4719 | 0.8132 | 0.8135 | | 0.2918 | 43.09 | 7800 | 0.4852 | 0.8149 | 0.8152 | | 0.2916 | 44.2 | 8000 | 0.4888 | 0.8163 | 0.8166 | | 0.2925 | 45.3 | 8200 | 0.4773 | 0.8154 | 0.8155 | | 0.2948 | 46.41 | 8400 | 0.4780 | 0.8172 | 0.8173 | | 0.2926 | 47.51 | 8600 | 0.4925 | 0.8150 | 0.8155 | | 0.2859 | 48.62 | 8800 | 0.4869 | 0.8142 | 0.8145 | | 0.284 | 49.72 | 
9000 | 0.5006 | 0.8146 | 0.8148 | | 0.2889 | 50.83 | 9200 | 0.4914 | 0.8129 | 0.8135 | | 0.2856 | 51.93 | 9400 | 0.4951 | 0.8139 | 0.8141 | | 0.2832 | 53.04 | 9600 | 0.4966 | 0.8142 | 0.8145 | | 0.2822 | 54.14 | 9800 | 0.4966 | 0.8136 | 0.8138 | | 0.2826 | 55.25 | 10000 | 0.4992 | 0.8135 | 0.8138 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
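Since this card does not include a usage snippet, the following is a hedged sketch of how a PEFT adapter like this one is typically loaded on top of its base model. The repository ids come from the card itself; the model class and label count are assumptions (the GUE_EMP_H3K79me3 task is treated here as binary sequence classification), and the seqsight backbone may need `trust_remote_code=True` or a task-specific head.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_4096_512_46M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f"

# AutoModelForSequenceClassification and num_labels=2 are assumptions about the task head.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
print(logits)
```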
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f", "results": []}]}
mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_46M", "region:us" ]
null
2024-04-26T22:45:51+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
GUE\_EMP\_H3K79me3-seqsight\_4096\_512\_46M-L8\_f ================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset. It achieves the following results on the evaluation set: * Loss: 0.4237 * F1 Score: 0.8280 * Accuracy: 0.8280 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
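The original training script is not linked from this card, but the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as shown below. Anything not stated in the card (the output directory, whether the batch size was per device or total) is a placeholder or an approximation.

```python
from transformers import TrainingArguments

# Placeholder output_dir; per_device_* is an approximation of the reported train/eval batch size.
training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_4096_512_46M-L8_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```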
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloom-1b1 - GGUF - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloom-1b1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [bloom-1b1.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q2_K.gguf) | Q2_K | 0.66GB | | [bloom-1b1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ3_XS.gguf) | IQ3_XS | 0.73GB | | [bloom-1b1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ3_S.gguf) | IQ3_S | 0.73GB | | [bloom-1b1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q3_K_S.gguf) | Q3_K_S | 0.73GB | | [bloom-1b1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ3_M.gguf) | IQ3_M | 0.77GB | | [bloom-1b1.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q3_K.gguf) | Q3_K | 0.79GB | | [bloom-1b1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q3_K_M.gguf) | Q3_K_M | 0.79GB | | [bloom-1b1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q3_K_L.gguf) | Q3_K_L | 0.82GB | | [bloom-1b1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ4_XS.gguf) | IQ4_XS | 0.84GB | | [bloom-1b1.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_0.gguf) | Q4_0 | 0.87GB | | [bloom-1b1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.IQ4_NL.gguf) | IQ4_NL | 0.87GB | | [bloom-1b1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_K_S.gguf) | Q4_K_S | 0.87GB | | [bloom-1b1.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_K.gguf) | Q4_K | 0.91GB | | [bloom-1b1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_K_M.gguf) | Q4_K_M | 0.91GB | | [bloom-1b1.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q4_1.gguf) | Q4_1 | 0.93GB | | [bloom-1b1.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_0.gguf) | Q5_0 | 0.99GB | | [bloom-1b1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_K_S.gguf) | Q5_K_S | 0.99GB | | [bloom-1b1.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_K.gguf) | Q5_K | 1.02GB | | [bloom-1b1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_K_M.gguf) | Q5_K_M | 1.02GB | | [bloom-1b1.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q5_1.gguf) | Q5_1 | 1.05GB | | [bloom-1b1.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-1b1-gguf/blob/main/bloom-1b1.Q6_K.gguf) | Q6_K | 1.12GB | Original model description: --- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - 
pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors) ## Model Details ### Basics *This section provides information for anyone who wants to know about the model.* <details> <summary>Click to expand</summary> <br/> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** [email protected] **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ### Technical Specifications *This section provides information for people who work on model development.* <details> <summary>Click to expand</summary><br/> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. **Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 1,065,314,304 parameters: * 385,351,680 embedding parameters * 24 layers, 16 attention heads * Hidden layers are 1536-dimensional * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). 
* Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) #### **Training** Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11d-760M-logs) - Number of epochs: 1 - Dates: - Started 11th March, 2022 11:42am PST - Ended 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) - Server training location: Île-de-France, France #### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. </details> ### Environmental Impact <details> <summary>Click to expand</summary><br/> The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* </details> <p>&nbsp;</p> ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary><br/> ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. 
The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> <p>&nbsp;</p> ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* <details> <summary>Click to expand</summary><br/> Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) The following table shows the further distribution of Niger-Congo and Indic languages in the training data. <details> <summary>Click to expand</summary><br/> | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | </details> The following table shows the distribution of programming languages. 
<details> <summary>Click to expand</summary><br/> | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | </details> </details> <p>&nbsp;</p> ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary><br/> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs </details> <p>&nbsp;</p> ## Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary><br/> ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.7 - Validation Loss: 3.1 - Perplexity: 21.9 (More evaluation scores forthcoming at the end of model training.) </details> <p>&nbsp;</p> ## Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary><br/> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. 
</details> <p>&nbsp;</p> ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary><br/> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). - <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. 
</details> <p>&nbsp;</p> ## More Information <details> <summary>Click to expand</summary><br/> ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book </details> <p>&nbsp;</p> ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
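A minimal way to try one of the GGUF files listed above is through `llama-cpp-python`, which can pull a file directly from the Hugging Face Hub. This is a sketch only: it assumes a llama.cpp build with BLOOM support installed locally and uses the Q4_K_M file name from the quantization table; any of the other listed files can be substituted.

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Downloads the chosen quantization from the repo above and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/bigscience_-_bloom-1b1-gguf",
    filename="bloom-1b1.Q4_K_M.gguf",
    n_ctx=2048,  # matches the sequence length reported in the card
)

out = llm("The BLOOM language model was trained on", max_tokens=48)
print(out["choices"][0]["text"])
```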
{}
RichardErkhov/bigscience_-_bloom-1b1-gguf
null
[ "gguf", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "region:us" ]
null
2024-04-26T22:46:01+00:00
[ "1909.08053", "2110.02861", "2108.12409" ]
[]
TAGS #gguf #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bloom-1b1 - GGUF * Model creator: URL * Original model: URL Name: bloom-1b1.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.66GB Name: bloom-1b1.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.73GB Name: bloom-1b1.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.73GB Name: bloom-1b1.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.73GB Name: bloom-1b1.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.77GB Name: bloom-1b1.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.79GB Name: bloom-1b1.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.79GB Name: bloom-1b1.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.82GB Name: bloom-1b1.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.84GB Name: bloom-1b1.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.87GB Name: bloom-1b1.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.87GB Name: bloom-1b1.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.87GB Name: bloom-1b1.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.91GB Name: bloom-1b1.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.91GB Name: bloom-1b1.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.93GB Name: bloom-1b1.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.99GB Name: bloom-1b1.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.99GB Name: bloom-1b1.Q5\_K.gguf, Quant method: Q5\_K, Size: 1.02GB Name: bloom-1b1.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 1.02GB Name: bloom-1b1.Q5\_1.gguf, Quant method: Q5\_1, Size: 1.05GB Name: bloom-1b1.Q6\_K.gguf, Quant method: Q6\_K, Size: 1.12GB Original model description: --------------------------- license: bigscience-bloom-rail-1.0 language: * ak * ar * as * bm * bn * ca * code * en * es * eu * fon * fr * gu * hi * id * ig * ki * kn * lg * ln * ml * mr * ne * nso * ny * or * pa * pt * rn * rw * sn * st * sw * ta * te * tn * ts * tum * tw * ur * vi * wo * xh * yo * zh * zhs * zht * zu pipeline\_tag: text-generation --- BLOOM LM ======== *BigScience Large Open-science Open-access Multilingual Language Model* ----------------------------------------------------------------------- ### Model Card ![](URL alt=) Version 1.0 / 26.May.2022 Table of Contents ----------------- 1. Model Details 2. Uses 3. Training Data 4. Risks and Limitations 5. Evaluation 6. Recommendations 7. Glossary and Calculations 8. More Information 9. Model Card Authors Model Details ------------- ### Basics *This section provides information for anyone who wants to know about the model.* Click to expand Developed by: BigScience (website) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* Model Type: Transformer-based Language Model Version: 1.0.0 Languages: Multiple; see training data License: RAIL License v1.0 (link) Release Date Estimate: Monday, 11.July.2022 Send Questions to: bigscience-contact@URL Cite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022 Funded by: * The French government. * Hugging Face (website). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* ### Technical Specifications *This section provides information for people who work on model development.* Click to expand Please see the BLOOM training README for full details on replicating training. 
Model Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code): * Decoder-only architecture * Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper) * ALiBI positional encodings (see paper), with GeLU activation functions * 1,065,314,304 parameters: + 385,351,680 embedding parameters + 24 layers, 16 attention heads + Hidden layers are 1536-dimensional + Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description) Objective Function: Cross Entropy with mean reduction (see API documentation). Compute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement). * Hardware: 384 A100 80GB GPUs (48 nodes): + Additional 32 A100 80GB GPUs (4 nodes) in reserve + 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links + CPU: AMD + CPU memory: 512GB per node + GPU memory: 640GB per node + Inter-node connect: Omni-Path Architecture (OPA) + NCCL-communications network: a fully dedicated subnet + Disc IO network: shared network with other types of nodes * Software: + Megatron-DeepSpeed (Github link) + DeepSpeed (Github link) + PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link) + apex (Github link) #### Training Training logs: Tensorboard link * Number of epochs: 1 * Dates: + Started 11th March, 2022 11:42am PST + Ended 5th July, 2022 * Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) * Server training location: Île-de-France, France #### Tokenization The BLOOM tokenizer (link) is a learned subword tokenizer trained using: * A byte-level Byte Pair Encoding (BPE) algorithm * A simple pre-tokenization rule, no normalization * A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. ### Environmental Impact Click to expand The training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. Estimated carbon emissions: *(Forthcoming upon completion of training.)* Estimated electricity usage: *(Forthcoming upon completion of training.)*   Uses ---- *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* Click to expand ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### Direct Use * Text generation * Exploring characteristics of language generated by a language model + Examples: Cloze tests, counterfactuals, generations with reframings #### Downstream Use * Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### Out-of-scope Uses Using the model in high-stakes settings is out of scope for this model.  
The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: * Usage in biomedical domains, political and legal domains, or finance domains * Usage for evaluating or scoring individuals, such as for employment, education, or credit * Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### Misuse Intentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes: * Spam generation * Disinformation and influence operations * Disparagement and defamation * Harassment and abuse * Deception * Unconsented impersonation and imitation * Unconsented surveillance * Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions ### Intended Users #### Direct Users * General Public * Researchers * Students * Educators * Engineers/developers * Non-commercial entities * Community advocates, including human and civil rights groups #### Indirect Users * Users of derivatives created by Direct Users, such as those using software with an intended use * Users of Derivatives of the Model, as described in the License #### Others Affected (Parties Prenantes) * People and groups referred to by the LLM * People and groups exposed to outputs of, or decisions based on, the LLM * People and groups whose original work is included in the LLM   Training Data ------------- *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Click to expand Details for each dataset are provided in individual Data Cards. Training data includes: * 45 natural languages * 12 programming languages * In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.) #### Languages The pie chart shows the distribution of languages in training data. !pie chart showing the distribution of languages in training data The following table shows the further distribution of Niger-Congo and Indic languages in the training data. Click to expand The following table shows the distribution of programming languages. 
Click to expand Extension: java, Language: Java, Number of files: 5,407,724 Extension: php, Language: PHP, Number of files: 4,942,186 Extension: cpp, Language: C++, Number of files: 2,503,930 Extension: py, Language: Python, Number of files: 2,435,072 Extension: js, Language: JavaScript, Number of files: 1,905,518 Extension: cs, Language: C#, Number of files: 1,577,347 Extension: rb, Language: Ruby, Number of files: 6,78,413 Extension: cc, Language: C++, Number of files: 443,054 Extension: hpp, Language: C++, Number of files: 391,048 Extension: lua, Language: Lua, Number of files: 352,317 Extension: go, Language: GO, Number of files: 227,763 Extension: ts, Language: TypeScript, Number of files: 195,254 Extension: C, Language: C, Number of files: 134,537 Extension: scala, Language: Scala, Number of files: 92,052 Extension: hh, Language: C++, Number of files: 67,161 Extension: H, Language: C++, Number of files: 55,899 Extension: tsx, Language: TypeScript, Number of files: 33,107 Extension: rs, Language: Rust, Number of files: 29,693 Extension: phpt, Language: PHP, Number of files: 9,702 Extension: c++, Language: C++, Number of files: 1,342 Extension: h++, Language: C++, Number of files: 791 Extension: php3, Language: PHP, Number of files: 540 Extension: phps, Language: PHP, Number of files: 270 Extension: php5, Language: PHP, Number of files: 166 Extension: php4, Language: PHP, Number of files: 29   Risks and Limitations --------------------- *This section identifies foreseeable harms and misunderstandings.* Click to expand Model may: * Overrepresent some viewpoints and underrepresent others * Contain stereotypes * Contain personal information * Generate: + Hateful, abusive, or violent language + Discriminatory or prejudicial language + Content that may not be appropriate for all settings, including sexual content * Make errors, including producing incorrect information as if it were factual * Generate irrelevant or repetitive outputs   Evaluation ---------- *This section describes the evaluation protocols and provides the results.* Click to expand ### Metrics *This section describes the different ways performance is calculated and why.* Includes: And multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)* ### Factors *This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* * Language, such as English or Yoruba * Domain, such as newswire or stories * Demographic characteristics, such as gender or nationality ### Results *Results are based on the Factors and Metrics.* Train-time Evaluation: As of 25.May.2022, 15:00 PST: * Training Loss: 2.7 * Validation Loss: 3.1 * Perplexity: 21.9 (More evaluation scores forthcoming at the end of model training.)   Recommendations --------------- *This section provides information on warnings and potential mitigations.* Click to expand * Indirect users should be made aware when the content they're working with is created by the LLM. * Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. * Models pretrained with the LLM should include an updated Model Card. * Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.   
Glossary and Calculations ------------------------- *This section defines common terms and how metrics are calculated.* Click to expand * Loss: A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. * Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. * High-stakes settings: Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed Artificial Intelligence (AI) Act. * Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act. * Human rights: Includes those rights defined in the Universal Declaration of Human Rights. * Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as "personal data" in the European Union's General Data Protection Regulation; and "personal information" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law. * Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1) * Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.   More Information ---------------- Click to expand ### Dataset Creation Blog post detailing the design choices during the dataset creation: URL ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL More details on the architecture/optimizer: URL Blog post on the hardware/engineering side: URL Details on the distributed setup used for the training: URL Tensorboard updated during the training: URL Insights on how to approach training, negative results: URL Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL ### Initial Results Initial prompting experiments using interim checkpoints: URL   Model Card Authors ------------------ *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
[ "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Training Data\n4. Risks and Limitations\n5. Evaluation\n6. Recommendations\n7. Glossary and Calculations\n8. More Information\n9. Model Card Authors\n\n\nModel Details\n-------------", "### Basics\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n\nClick to expand \n\nDeveloped by: BigScience (website)\n\n\n* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n\n\nModel Type: Transformer-based Language Model\n\n\nVersion: 1.0.0\n\n\nLanguages: Multiple; see training data\n\n\nLicense: RAIL License v1.0 (link)\n\n\nRelease Date Estimate: Monday, 11.July.2022\n\n\nSend Questions to: bigscience-contact@URL\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nFunded by:\n\n\n* The French government.\n* Hugging Face (website).\n* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*", "### Technical Specifications\n\n\n*This section provides information for people who work on model development.*\n\n\n\nClick to expand \n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 1,065,314,304 parameters:\n\n\n\t+ 385,351,680 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1536-dimensional\n\t+ Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "#### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Number of epochs: 1\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "#### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.", "### Environmental Impact\n\n\n\nClick to expand \n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\n\n \n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*\n\n\n\nClick to expand", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\n\n \n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\n\nClick to expand \n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nClick to expand \n\n\n\nThe following table shows the distribution of programming languages.\n\n\n\nClick to expand \n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\n\n\n \n\n\nRisks and Limitations\n---------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\n\nClick to expand \n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs\n\n\n\n \n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*\n\n\n\nClick to expand", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.7\n* Validation Loss: 3.1\n* Perplexity: 21.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\n\n \n\n\nRecommendations\n---------------\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n\nClick to expand \n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\n\n \n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n\nClick to expand \n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\n\n \n\n\nMore Information\n----------------\n\n\n\nClick to expand", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on 
how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\n\n \n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff" ]
[ "TAGS\n#gguf #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #region-us \n", "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Training Data\n4. Risks and Limitations\n5. Evaluation\n6. Recommendations\n7. Glossary and Calculations\n8. More Information\n9. Model Card Authors\n\n\nModel Details\n-------------", "### Basics\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n\nClick to expand \n\nDeveloped by: BigScience (website)\n\n\n* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n\n\nModel Type: Transformer-based Language Model\n\n\nVersion: 1.0.0\n\n\nLanguages: Multiple; see training data\n\n\nLicense: RAIL License v1.0 (link)\n\n\nRelease Date Estimate: Monday, 11.July.2022\n\n\nSend Questions to: bigscience-contact@URL\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nFunded by:\n\n\n* The French government.\n* Hugging Face (website).\n* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*", "### Technical Specifications\n\n\n*This section provides information for people who work on model development.*\n\n\n\nClick to expand \n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 1,065,314,304 parameters:\n\n\n\t+ 385,351,680 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 1536-dimensional\n\t+ Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 384 A100 80GB GPUs (48 nodes):\n\n\n\t+ Additional 32 A100 80GB GPUs (4 nodes) in reserve\n\t+ 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links\n\t+ CPU: AMD\n\t+ CPU memory: 512GB per node\n\t+ GPU memory: 640GB per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "#### Training\n\n\nTraining logs: Tensorboard link\n\n\n* Number of epochs: 1\n* Dates:\n\n\n\t+ Started 11th March, 2022 11:42am PST\n\t+ Ended 5th July, 2022\n* Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)\n* Server training location: Île-de-France, France", "#### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.", "### Environmental Impact\n\n\n\nClick to expand \n\nThe 
training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\n\n \n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*\n\n\n\nClick to expand", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\n\n \n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\n\nClick to expand \n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\nClick to expand \n\n\n\nThe following table shows the distribution of programming languages.\n\n\n\nClick to expand \n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\n\n\n \n\n\nRisks and Limitations\n---------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\n\nClick to expand \n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs\n\n\n\n \n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*\n\n\n\nClick to expand", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of BLOOM models. 
Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.7\n* Validation Loss: 3.1\n* Perplexity: 21.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\n\n \n\n\nRecommendations\n---------------\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n\nClick to expand \n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\n\n \n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n\nClick to expand \n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\n\n \n\n\nMore Information\n----------------\n\n\n\nClick to expand", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on 
how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\n\n \n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloom-1b7 - bnb 4bits - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloom-1b7/ Original model description: --- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 # Model Card for Bloom-1b7 <!-- Provide a quick summary of what the model is/does. --> ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Recommendations](#recommendations) 5. [Training Data](#training-data) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Technical Specifications](#techincal-specifications) 9. [Citation](#citation) 10. [Glossary and Calculations](#glossary-and-calculations) 11. [More Information](#more-information) 12. [Model Card Authors](#model-card-authors) 13. [Model Card Contact](#model-card-contact) ## Model Details ### Model Description *This section provides information for anyone who wants to know about the model.* - **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* - **Model Type:** Transformer-based Language Model - **Version:** 1.0.0 - **Languages:** Multiple; see [training data](#training-data) - **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) - **Release Date Estimate:** Monday, 11.July.2022 - **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. 
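
*Usage sketch:* since this repository packages bloom-1b7 as a bitsandbytes 4-bit quantization, loading the original checkpoint with a 4-bit `BitsAndBytesConfig` reproduces roughly the same setup. A minimal sketch, assuming a CUDA GPU with `bitsandbytes` and `accelerate` installed; the compute dtype and generation settings are illustrative choices, not values taken from this card:

```python
# Hedged sketch: load bigscience/bloom-1b7 in 4-bit with bitsandbytes and generate text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights, as in this "bnb 4bits" variant
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    quantization_config=quant_config,
    device_map="auto",
)

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Four-bit weights occupy roughly a quarter of the fp16 footprint (plus a small overhead for quantization constants), which is the main reason to prefer this variant on smaller GPUs.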
#### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM ## Bias, Risks, and Limitations *This section identifies foreseeable harms and misunderstandings.* Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs ### Recommendations *This section provides information on warnings and potential mitigations.* - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. 
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) **The following table shows the further distribution of Niger-Congo and Indic languages in the training data.** | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | **The following table shows the distribution of programming languages.** | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 678,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | ## Evaluation *This section describes the evaluation protocols and provides the results.* ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of BLOOM models.
Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) - [BLOOM Book](https://huggingface.co/spaces/bigscience/bloom-book): Read generations from BLOOM based on prompts provided by the community ## Environmental Impact The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* ## Technical Specifications *This section provides information for people who work on model development.* Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. **Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 1,722,408,960 parameters: * 513,802,240 embedding parameters * 24 layers, 16 attention heads * Hidden layers are 2048-dimensional * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). 
* Hardware: 64 V100 16/32GB GPUs (16 nodes): * 4 GPUs per node * 40 CPUs per task * 1 task per node * CPU: AMD * CPU memory: 160GB per node * GPU memory: 64GB or 128GB (depending on node availability during training) per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) ### **Training** - Checkpoint size: - Fp16 weights: 2.6GB (# params * 2) - Full checkpoint with optimizer states: -- - Training throughput: -- - Number of epochs: 1 - Dates: - Start: 11th March, 2022 11:42am PST - End: 20 May, 2022 - Server training location: Île-de-France, France ### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. ## Citation **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). 
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UDHR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. ## More Information ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff ## Model Card Contact **Send Questions to:** [email protected]
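This repository (see the id and tags recorded below) packages a bitsandbytes 4-bit quantization of bloom-1b7. A minimal generation sketch — illustrative only, not from the original card — assuming a CUDA GPU and the `transformers`, `accelerate`, and `bitsandbytes` packages, quantizing the public `bigscience/bloom-1b7` checkpoint on the fly:

```python
# Illustrative sketch: quantizes the original bigscience/bloom-1b7 checkpoint to 4 bits
# with bitsandbytes at load time (this repository ships a pre-quantized copy).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the 4-bit weights on the available GPU(s)
)

prompt = "BLOOM is a multilingual language model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading the pre-quantized weights stored in this repository directly should behave the same way, but that is inferred from the `4-bit` tag rather than from documented usage.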
{}
RichardErkhov/bigscience_-_bloom-1b7-4bits
null
[ "transformers", "safetensors", "bloom", "text-generation", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T22:46:11+00:00
[ "1909.08053", "2110.02861", "2108.12409" ]
[]
TAGS #transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models bloom-1b7 - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: bigscience-bloom-rail-1.0 language: * ak * ar * as * bm * bn * ca * code * en * es * eu * fon * fr * gu * hi * id * ig * ki * kn * lg * ln * ml * mr * ne * nso * ny * or * pa * pt * rn * rw * sn * st * sw * ta * te * tn * ts * tum * tw * ur * vi * wo * xh * yo * zh * zhs * zht * zu pipeline\_tag: text-generation --- BLOOM LM ======== *BigScience Large Open-science Open-access Multilingual Language Model* ----------------------------------------------------------------------- ### Model Card ![](URL alt=) Version 1.0 / 26.May.2022 Model Card for Bloom-1b7 ======================== Table of Contents ----------------- 1. Model Details 2. Uses 3. Bias, Risks, and Limitations 4. Recommendations 5. Training Data 6. Evaluation 7. Environmental Impact 8. Technical Specifications 9. Citation 10. Glossary and Calculations 11. More Information 12. Model Card Authors 13. Model Card Contact Model Details ------------- ### Model Description *This section provides information for anyone who wants to know about the model.* * Developed by: BigScience (website) + All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* * Model Type: Transformer-based Language Model * Version: 1.0.0 * Languages: Multiple; see training data * License: RAIL License v1.0 (link) * Release Date Estimate: Monday, 11.July.2022 * Funded by: + The French government. + Hugging Face (website). + Organizations of contributors. *(Further breakdown of organizations forthcoming.)* Uses ---- *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### Direct Use * Text generation * Exploring characteristics of language generated by a language model + Examples: Cloze tests, counterfactuals, generations with reframings #### Downstream Use * Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### Out-of-scope Uses Using the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. 
##### Out-of-scope Uses Include: * Usage in biomedical domains, political and legal domains, or finance domains * Usage for evaluating or scoring individuals, such as for employment, education, or credit * Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### Misuse Intentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. This includes: * Spam generation * Disinformation and influence operations * Disparagement and defamation * Harassment and abuse * Deception * Unconsented impersonation and imitation * Unconsented surveillance * Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions ### Intended Users #### Direct Users * General Public * Researchers * Students * Educators * Engineers/developers * Non-commercial entities * Community advocates, including human and civil rights groups #### Indirect Users * Users of derivatives created by Direct Users, such as those using software with an intended use * Users of Derivatives of the Model, as described in the License #### Others Affected (Parties Prenantes) * People and groups referred to by the LLM * People and groups exposed to outputs of, or decisions based on, the LLM * People and groups whose original work is included in the LLM Bias, Risks, and Limitations ---------------------------- *This section identifies foreseeable harms and misunderstandings.* Model may: * Overrepresent some viewpoints and underrepresent others * Contain stereotypes * Contain personal information * Generate: + Hateful, abusive, or violent language + Discriminatory or prejudicial language + Content that may not be appropriate for all settings, including sexual content * Make errors, including producing incorrect information as if it were factual * Generate irrelevant or repetitive outputs ### Recommendations *This section provides information on warnings and potential mitigations.* * Indirect users should be made aware when the content they're working with is created by the LLM. * Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. * Models pretrained with the LLM should include an updated Model Card. * Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. Training Data ------------- *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* Details for each dataset are provided in individual Data Cards. Training data includes: * 45 natural languages * 12 programming languages * In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.) #### Languages The pie chart shows the distribution of languages in training data. !pie chart showing the distribution of languages in training data The following table shows the further distribution of Niger-Congo and Indic languages in the training data. The following table shows the distribution of programming languages. 
Extension: java, Language: Java, Number of files: 5,407,724 Extension: php, Language: PHP, Number of files: 4,942,186 Extension: cpp, Language: C++, Number of files: 2,503,930 Extension: py, Language: Python, Number of files: 2,435,072 Extension: js, Language: JavaScript, Number of files: 1,905,518 Extension: cs, Language: C#, Number of files: 1,577,347 Extension: rb, Language: Ruby, Number of files: 6,78,413 Extension: cc, Language: C++, Number of files: 443,054 Extension: hpp, Language: C++, Number of files: 391,048 Extension: lua, Language: Lua, Number of files: 352,317 Extension: go, Language: GO, Number of files: 227,763 Extension: ts, Language: TypeScript, Number of files: 195,254 Extension: C, Language: C, Number of files: 134,537 Extension: scala, Language: Scala, Number of files: 92,052 Extension: hh, Language: C++, Number of files: 67,161 Extension: H, Language: C++, Number of files: 55,899 Extension: tsx, Language: TypeScript, Number of files: 33,107 Extension: rs, Language: Rust, Number of files: 29,693 Extension: phpt, Language: PHP, Number of files: 9,702 Extension: c++, Language: C++, Number of files: 1,342 Extension: h++, Language: C++, Number of files: 791 Extension: php3, Language: PHP, Number of files: 540 Extension: phps, Language: PHP, Number of files: 270 Extension: php5, Language: PHP, Number of files: 166 Extension: php4, Language: PHP, Number of files: 29 Evaluation ---------- *This section describes the evaluation protocols and provides the results.* ### Metrics *This section describes the different ways performance is calculated and why.* Includes: And multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)* ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* * Language, such as English or Yoruba * Domain, such as newswire or stories * Demographic characteristics, such as gender or nationality ### Results *Results are based on the Factors and Metrics.* Train-time Evaluation: As of 25.May.2022, 15:00 PST: * Training Loss: 2.0 * Validation Loss: 2.2 * Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) * BLOOM Book: Read generations from BLOOM based on prompts provided by the community Environmental Impact -------------------- The training supercomputer, Jean Zay (website), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. Estimated carbon emissions: *(Forthcoming upon completion of training.)* Estimated electricity usage: *(Forthcoming upon completion of training.)* Technical Specifications ------------------------ *This section provides information for people who work on model development.* Please see the BLOOM training README for full details on replicating training. Model Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code): * Decoder-only architecture * Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper) * ALiBI positional encodings (see paper), with GeLU activation functions * 1,722,408,960 parameters: + 513,802,240 embedding parameters + 24 layers, 16 attention heads + Hidden layers are 2048-dimensional + Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description) Objective Function: Cross Entropy with mean reduction (see API documentation). 
Compute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement). * Hardware: 64 V100 16/32GB GPUs (16 nodes): + 4 GPUs per node + 40 CPUs per task + 1 task per node + CPU: AMD + CPU memory: 160GB per node + GPU memory: 64GB or 128GB (depending on node availability during training) per node + Inter-node connect: Omni-Path Architecture (OPA) + NCCL-communications network: a fully dedicated subnet + Disc IO network: shared network with other types of nodes * Software: + Megatron-DeepSpeed (Github link) + DeepSpeed (Github link) + PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link) + apex (Github link) ### Training * Checkpoint size: + Fp16 weights: 2.6GB (# params \* 2) + Full checkpoint with optimizer states: -- * Training throughput: -- * Number of epochs: 1 * Dates: + Start: 11th March, 2022 11:42am PST + End: 20 May, 2022 * Server training location: Île-de-France, France ### Tokenization The BLOOM tokenizer (link) is a learned subword tokenizer trained using: * A byte-level Byte Pair Encoding (BPE) algorithm * A simple pre-tokenization rule, no normalization * A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. Cite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022 Glossary and Calculations ------------------------- *This section defines common terms and how metrics are calculated.* * Loss: A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. * Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. * High-stakes settings: Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed Artificial Intelligence (AI) Act. * Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act. * Human rights: Includes those rights defined in the Universal Declaration of Human Rights. * Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as "personal data" in the European Union's General Data Protection Regulation; and "personal information" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law. * Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1) * Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. 
More Information ---------------- ### Dataset Creation Blog post detailing the design choices during the dataset creation: URL ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL More details on the architecture/optimizer: URL Blog post on the hardware/engineering side: URL Details on the distributed setup used for the training: URL Tensorboard updated during the training: URL Insights on how to approach training, negative results: URL Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL ### Initial Results Initial prompting experiments using interim checkpoints: URL Model Card Authors ------------------ *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff Model Card Contact ------------------ Send Questions to: bigscience-contact@URL
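A small consistency check — illustrative only, not part of the original card — tying the reported train-time numbers to the glossary above: for a cross-entropy loss measured in nats per token, perplexity is exp(loss), and the reported validation loss and perplexity agree up to rounding.

```python
import math

# Train-time figures reported in the card (snapshot of 25.May.2022).
validation_loss = 2.2       # cross-entropy, nats per token (rounded in the card)
reported_perplexity = 8.9

print(math.exp(validation_loss))  # ~9.03, consistent with the reported 8.9 given the rounded loss
```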
[ "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nModel Card for Bloom-1b7\n========================\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Recommendations\n5. Training Data\n6. Evaluation\n7. Environmental Impact\n8. Technical Specifications\n9. Citation\n10. Glossary and Calculations\n11. More Information\n12. Model Card Authors\n13. Model Card Contact\n\n\nModel Details\n-------------", "### Model Description\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n* Developed by: BigScience (website)\n\n\n\t+ All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n* Model Type: Transformer-based Language Model\n* Version: 1.0.0\n* Languages: Multiple; see training data\n* License: RAIL License v1.0 (link)\n* Release Date Estimate: Monday, 11.July.2022\n* Funded by:\n\n\n\t+ The French government.\n\t+ Hugging Face (website).\n\t+ Organizations of contributors. *(Further breakdown of organizations forthcoming.)*\n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs", "### Recommendations\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\n\nThe following table shows the distribution of programming languages.\n\n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.0\n* Validation Loss: 2.2\n* Perplexity: 8.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\n* BLOOM Book: Read generations from BLOOM based on prompts provided by the community\n\n\nEnvironmental Impact\n--------------------\n\n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\nTechnical Specifications\n------------------------\n\n\n*This section provides information for people who work on model development.*\n\n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 1,722,408,960 parameters:\n\n\n\t+ 513,802,240 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 2048-dimensional\n\t+ Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 64 V100 16/32GB GPUs (16 nodes):\n\n\n\t+ 4 GPUs per node\n\t+ 40 CPUs per task\n\t+ 1 task per node\n\t+ CPU: AMD\n\t+ CPU memory: 160GB per node\n\t+ GPU memory: 64GB or 128GB (depending on node availability during training) per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "### Training\n\n\n* Checkpoint size:\n\n\n\t+ Fp16 weights: 2.6GB (# params \\* 2)\n\t+ Full checkpoint with optimizer states: --\n* Training throughput: --\n* Number of epochs: 1\n* Dates:\n\n\n\t+ Start: 11th March, 2022 11:42am PST\n\t+ End: 20 May, 2022\n* Server training location: Île-de-France, France", "### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\nMore Information\n----------------", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff\n\n\nModel Card Contact\n------------------\n\n\nSend Questions to: bigscience-contact@URL" ]
[ "TAGS\n#transformers #safetensors #bloom #text-generation #arxiv-1909.08053 #arxiv-2110.02861 #arxiv-2108.12409 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Model Card\n\n\n![](URL alt=)\nVersion 1.0 / 26.May.2022\n\n\nModel Card for Bloom-1b7\n========================\n\n\nTable of Contents\n-----------------\n\n\n1. Model Details\n2. Uses\n3. Bias, Risks, and Limitations\n4. Recommendations\n5. Training Data\n6. Evaluation\n7. Environmental Impact\n8. Technical Specifications\n9. Citation\n10. Glossary and Calculations\n11. More Information\n12. Model Card Authors\n13. Model Card Contact\n\n\nModel Details\n-------------", "### Model Description\n\n\n*This section provides information for anyone who wants to know about the model.*\n\n\n* Developed by: BigScience (website)\n\n\n\t+ All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*\n* Model Type: Transformer-based Language Model\n* Version: 1.0.0\n* Languages: Multiple; see training data\n* License: RAIL License v1.0 (link)\n* Release Date Estimate: Monday, 11.July.2022\n* Funded by:\n\n\n\t+ The French government.\n\t+ Hugging Face (website).\n\t+ Organizations of contributors. *(Further breakdown of organizations forthcoming.)*\n\n\nUses\n----\n\n\n*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.\nIt provides information for anyone considering using the model or who is affected by the model.*", "### Intended Use\n\n\nThis model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.", "#### Direct Use\n\n\n* Text generation\n* Exploring characteristics of language generated by a language model\n\n\n\t+ Examples: Cloze tests, counterfactuals, generations with reframings", "#### Downstream Use\n\n\n* Tasks that leverage language models include: Information Extraction, Question Answering, Summarization", "### Misuse and Out-of-scope Use\n\n\n*This section addresses what users ought not do with the model.*\n\n\nSee the BLOOM License, Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.", "#### Out-of-scope Uses\n\n\nUsing the model in high-stakes settings is out of scope for this model.  The model is not designed for critical decisions nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.", "##### Out-of-scope Uses Include:\n\n\n* Usage in biomedical domains, political and legal domains, or finance domains\n* Usage for evaluating or scoring individuals, such as for employment, education, or credit\n* Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct", "#### Misuse\n\n\nIntentionally using the model for harm, violating human rights, or other kinds of malicious activities, is a misuse of this model. 
This includes:\n\n\n* Spam generation\n* Disinformation and influence operations\n* Disparagement and defamation\n* Harassment and abuse\n* Deception\n* Unconsented impersonation and imitation\n* Unconsented surveillance\n* Generating content without attribution to the model, as specified in the RAIL License, Use Restrictions", "### Intended Users", "#### Direct Users\n\n\n* General Public\n* Researchers\n* Students\n* Educators\n* Engineers/developers\n* Non-commercial entities\n* Community advocates, including human and civil rights groups", "#### Indirect Users\n\n\n* Users of derivatives created by Direct Users, such as those using software with an intended use\n* Users of Derivatives of the Model, as described in the License", "#### Others Affected (Parties Prenantes)\n\n\n* People and groups referred to by the LLM\n* People and groups exposed to outputs of, or decisions based on, the LLM\n* People and groups whose original work is included in the LLM\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\n*This section identifies foreseeable harms and misunderstandings.*\n\n\nModel may:\n\n\n* Overrepresent some viewpoints and underrepresent others\n* Contain stereotypes\n* Contain personal information\n* Generate:\n\n\n\t+ Hateful, abusive, or violent language\n\t+ Discriminatory or prejudicial language\n\t+ Content that may not be appropriate for all settings, including sexual content\n* Make errors, including producing incorrect information as if it were factual\n* Generate irrelevant or repetitive outputs", "### Recommendations\n\n\n*This section provides information on warnings and potential mitigations.*\n\n\n* Indirect users should be made aware when the content they're working with is created by the LLM.\n* Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.\n* Models pretrained with the LLM should include an updated Model Card.\n* Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.\n\n\nTraining Data\n-------------\n\n\n*This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.*\n\n\nDetails for each dataset are provided in individual Data Cards.\n\n\nTraining data includes:\n\n\n* 45 natural languages\n* 12 programming languages\n* In 1.5TB of pre-processed text, converted into 350B unique tokens (see the tokenizer section for more.)", "#### Languages\n\n\nThe pie chart shows the distribution of languages in training data.\n\n\n!pie chart showing the distribution of languages in training data\n\n\nThe following table shows the further distribution of Niger-Congo and Indic languages in the training data.\n\n\n\n\nThe following table shows the distribution of programming languages.\n\n\nExtension: java, Language: Java, Number of files: 5,407,724\nExtension: php, Language: PHP, Number of files: 4,942,186\nExtension: cpp, Language: C++, Number of files: 2,503,930\nExtension: py, Language: Python, Number of files: 2,435,072\nExtension: js, Language: JavaScript, Number of files: 1,905,518\nExtension: cs, Language: C#, Number of files: 1,577,347\nExtension: rb, Language: Ruby, Number of files: 6,78,413\nExtension: cc, Language: C++, Number of files: 443,054\nExtension: hpp, Language: C++, Number of files: 391,048\nExtension: lua, Language: Lua, Number of files: 352,317\nExtension: go, Language: GO, Number of files: 227,763\nExtension: ts, Language: TypeScript, Number of files: 195,254\nExtension: C, Language: C, Number of files: 134,537\nExtension: scala, Language: Scala, Number of files: 92,052\nExtension: hh, Language: C++, Number of files: 67,161\nExtension: H, Language: C++, Number of files: 55,899\nExtension: tsx, Language: TypeScript, Number of files: 33,107\nExtension: rs, Language: Rust, Number of files: 29,693\nExtension: phpt, Language: PHP, Number of files: 9,702\nExtension: c++, Language: C++, Number of files: 1,342\nExtension: h++, Language: C++, Number of files: 791\nExtension: php3, Language: PHP, Number of files: 540\nExtension: phps, Language: PHP, Number of files: 270\nExtension: php5, Language: PHP, Number of files: 166\nExtension: php4, Language: PHP, Number of files: 29\n\n\nEvaluation\n----------\n\n\n*This section describes the evaluation protocols and provides the results.*", "### Metrics\n\n\n*This section describes the different ways performance is calculated and why.*\n\n\nIncludes:\n\n\n\nAnd multiple different metrics for specific tasks. *(More evaluation metrics forthcoming upon completion of evaluation protocol.)*", "### Factors\n\n\n*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*\n\n\n* Language, such as English or Yoruba\n* Domain, such as newswire or stories\n* Demographic characteristics, such as gender or nationality", "### Results\n\n\n*Results are based on the Factors and Metrics.*\n\n\nTrain-time Evaluation:\n\n\nAs of 25.May.2022, 15:00 PST:\n\n\n* Training Loss: 2.0\n* Validation Loss: 2.2\n* Perplexity: 8.9\n\n\n(More evaluation scores forthcoming at the end of model training.)\n\n\n* BLOOM Book: Read generations from BLOOM based on prompts provided by the community\n\n\nEnvironmental Impact\n--------------------\n\n\nThe training supercomputer, Jean Zay (website), uses mostly nuclear energy. 
The heat generated by it is reused for heating campus housing.\n\n\nEstimated carbon emissions: *(Forthcoming upon completion of training.)*\n\n\nEstimated electricity usage: *(Forthcoming upon completion of training.)*\n\n\nTechnical Specifications\n------------------------\n\n\n*This section provides information for people who work on model development.*\n\n\nPlease see the BLOOM training README for full details on replicating training.\n\n\nModel Architecture: Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code):\n\n\n* Decoder-only architecture\n* Layer normalization applied to word embeddings layer ('StableEmbedding'; see code, paper)\n* ALiBI positional encodings (see paper), with GeLU activation functions\n* 1,722,408,960 parameters:\n\n\n\t+ 513,802,240 embedding parameters\n\t+ 24 layers, 16 attention heads\n\t+ Hidden layers are 2048-dimensional\n\t+ Sequence length of 2048 tokens used (see BLOOM tokenizer, tokenizer description)\n\n\nObjective Function: Cross Entropy with mean reduction (see API documentation).\n\n\nCompute infrastructure: Jean Zay Public Supercomputer, provided by the French government (see announcement).\n\n\n* Hardware: 64 V100 16/32GB GPUs (16 nodes):\n\n\n\t+ 4 GPUs per node\n\t+ 40 CPUs per task\n\t+ 1 task per node\n\t+ CPU: AMD\n\t+ CPU memory: 160GB per node\n\t+ GPU memory: 64GB or 128GB (depending on node availability during training) per node\n\t+ Inter-node connect: Omni-Path Architecture (OPA)\n\t+ NCCL-communications network: a fully dedicated subnet\n\t+ Disc IO network: shared network with other types of nodes\n* Software:\n\n\n\t+ Megatron-DeepSpeed (Github link)\n\t+ DeepSpeed (Github link)\n\t+ PyTorch (pytorch-1.11 w/ CUDA-11.5; see Github link)\n\t+ apex (Github link)", "### Training\n\n\n* Checkpoint size:\n\n\n\t+ Fp16 weights: 2.6GB (# params \\* 2)\n\t+ Full checkpoint with optimizer states: --\n* Training throughput: --\n* Number of epochs: 1\n* Dates:\n\n\n\t+ Start: 11th March, 2022 11:42am PST\n\t+ End: 20 May, 2022\n* Server training location: Île-de-France, France", "### Tokenization\n\n\nThe BLOOM tokenizer (link) is a learned subword tokenizer trained using:\n\n\n* A byte-level Byte Pair Encoding (BPE) algorithm\n* A simple pre-tokenization rule, no normalization\n* A vocabulary size of 250,680\n\n\nIt was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.\n\n\nCite as: BigScience, *BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model*. International, May 2021-May 2022\n\n\nGlossary and Calculations\n-------------------------\n\n\n*This section defines common terms and how metrics are calculated.*\n\n\n* Loss: A calculation of the difference between what the model has learned and what the data shows (\"groundtruth\"). The lower the loss, the better. The training process aims to minimize the loss.\n* Perplexity: This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. 
Mathematically this is calculated using entropy.\n* High-stakes settings: Such as those identified as \"high-risk AI systems\" and \"unacceptable risk AI systems\" in the European Union's proposed Artificial Intelligence (AI) Act.\n* Critical decisions: Such as those defined in the United States' proposed Algorithmic Accountability Act.\n* Human rights: Includes those rights defined in the Universal Declaration of Human Rights.\n* Personal Data and Personal Information: Personal data and information is defined in multiple data protection regulations, such as \"personal data\" in the European Union's General Data Protection Regulation; and \"personal information\" in the Republic of South Africa's Protection of Personal Information Act, The People's Republic of China's Personal information protection law.\n* Sensitive characteristics: This includes specifically protected categories in human rights (see UHDR, Article 2) and personal information regulation (see GDPR, Article 9; Protection of Personal Information Act, Chapter 1)\n* Deception: Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.\n\n\nMore Information\n----------------", "### Dataset Creation\n\n\nBlog post detailing the design choices during the dataset creation: URL", "### Technical Specifications\n\n\nBlog post summarizing how the architecture, size, shape, and pre-training duration where selected: URL\n\n\nMore details on the architecture/optimizer: URL\n\n\nBlog post on the hardware/engineering side: URL\n\n\nDetails on the distributed setup used for the training: URL\n\n\nTensorboard updated during the training: URL\n\n\nInsights on how to approach training, negative results: URL\n\n\nDetails on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): URL", "### Initial Results\n\n\nInitial prompting experiments using interim checkpoints: URL\n\n\nModel Card Authors\n------------------\n\n\n*Ordered roughly chronologically and by amount of time spent.*\n\n\nMargaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff\n\n\nModel Card Contact\n------------------\n\n\nSend Questions to: bigscience-contact@URL" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_Instruction0_ASOPL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
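The hyperparameter list above maps directly onto `transformers` training arguments. A minimal sketch — illustrative only, since the card ships no training code and the dataset handling and `output_dir` below are assumptions:

```python
# Sketch of the documented hyperparameters expressed as Seq2SeqTrainingArguments.
# Everything not listed in the card (output_dir, data loading) is hypothetical.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="CS505_COQE_viT5_train_Instruction0_ASOPL_v1",  # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed precision
)
```

For inference, the published checkpoint should load like any other T5-style text2text model, e.g. `AutoModelForSeq2SeqLM.from_pretrained("ThuyNT/CS505_COQE_viT5_train_Instruction0_ASOPL_v1")`.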
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_ASOPL_v1", "results": []}]}
ThuyNT/CS505_COQE_viT5_train_Instruction0_ASOPL_v1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T22:47:25+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_train_Instruction0_ASOPL_v1 This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_train_Instruction0_ASOPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_train_Instruction0_ASOPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]