pipeline_tag
stringclasses
48 values
library_name
stringclasses
198 values
text
stringlengths
1
900k
metadata
stringlengths
2
438k
id
stringlengths
5
122
last_modified
null
tags
listlengths
1
1.84k
sha
null
created_at
stringlengths
25
25
arxiv
listlengths
0
201
languages
listlengths
0
1.83k
tags_str
stringlengths
17
9.34k
text_str
stringlengths
0
389k
text_lists
listlengths
0
722
processed_texts
listlengths
1
723
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Fredithefish/MystixNoromaidx <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MystixNoromaidx-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "base_model": "Fredithefish/MystixNoromaidx", "quantized_by": "mradermacher"}
mradermacher/MystixNoromaidx-i1-GGUF
null
[ "transformers", "gguf", "en", "base_model:Fredithefish/MystixNoromaidx", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:11:33+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-Fredithefish/MystixNoromaidx #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-Fredithefish/MystixNoromaidx #endpoints_compatible #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Rimyy/GemmaFTv2Math
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:18:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
Satyach/mistral-gemma-recovery
null
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-13T00:25:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_idpo_same_6iters_iter_4 This model is a fine-tuned version of [ShenaoZ/0.0001_idpo_same_6iters_iter_3](https://huggingface.co/ShenaoZ/0.0001_idpo_same_6iters_iter_3) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_idpo_same_6iters_iter_3", "model-index": [{"name": "0.0001_idpo_same_6iters_iter_4", "results": []}]}
ShenaoZ/0.0001_idpo_same_6iters_iter_4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0001_idpo_same_6iters_iter_3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T00:27:11+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_idpo_same_6iters_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0001_idpo_same_6iters_iter_4 This model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_6iters_iter_3 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0001_idpo_same_6iters_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_6iters_iter_3 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_idpo_same_6iters_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0001_idpo_same_6iters_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_6iters_iter_3 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Pot-l/bert-ner-skills
null
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:27:46+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
![Ostrich-70B](https://primal.b-cdn.net/media-cache?s=o&a=1&u=https%3A%2F%2Fm.primal.net%2FHyFP.png) # Model Card for Ostrich - Contentious, judgemental, uncensored, can't agree with itself 32% of the time! - Trained a bit about nostr - Trained a bit about bitcoin - Trained a bit in the health domain I am having success with chat template: \<s\> \[INST\] ... \<\/s\> It may also work with ChatML format, though I see more repetitions when I use that. ## Model Details Based on https://huggingface.co/crestf411/daybreak-miqu-1-70b-v1.0-hf because it is one of the most uncensored according to https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard. - **Fine tuned by:** someone - **Finetuned from model:** https://huggingface.co/crestf411/daybreak-miqu-1-70b-v1.0-hf ## Uses Ask any question, compared to other models this may know more about Nostr and Bitcoin. You can use llama.cpp to chat with it. You can also use llama-cpp-python package to use it in a Python script. ## Warning Users (both direct and downstream) should be aware of the risks, biases and limitations of the model. The trainer, developer or uploader of this model does not assume any liability. Use it at your own risk. ## Training Details ### Training Data Nostr related info from web and nostr itself, bitcoin related info. ### Training Procedure LLaMa-Factory is used to train on 2x3090! fsdp_qlora is the technique. It took ~185 hours for a dataset of 122MB.
{"license": "apache-2.0"}
some1nostr/Ostrich-70B
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-04-13T00:27:54+00:00
[]
[]
TAGS #gguf #license-apache-2.0 #region-us
!Ostrich-70B # Model Card for Ostrich - Contentious, judgemental, uncensored, can't agree with itself 32% of the time! - Trained a bit about nostr - Trained a bit about bitcoin - Trained a bit in the health domain I am having success with chat template: \<s\> \[INST\] ... \<\/s\> It may also work with ChatML format, though I see more repetitions when I use that. ## Model Details Based on URL because it is one of the most uncensored according to URL - Fine tuned by: someone - Finetuned from model: URL ## Uses Ask any question, compared to other models this may know more about Nostr and Bitcoin. You can use URL to chat with it. You can also use llama-cpp-python package to use it in a Python script. ## Warning Users (both direct and downstream) should be aware of the risks, biases and limitations of the model. The trainer, developer or uploader of this model does not assume any liability. Use it at your own risk. ## Training Details ### Training Data Nostr related info from web and nostr itself, bitcoin related info. ### Training Procedure LLaMa-Factory is used to train on 2x3090! fsdp_qlora is the technique. It took ~185 hours for a dataset of 122MB.
[ "# Model Card for Ostrich\n\n\n- Contentious, judgemental, uncensored, can't agree with itself 32% of the time!\n- Trained a bit about nostr\n- Trained a bit about bitcoin\n- Trained a bit in the health domain\n\nI am having success with chat template: \\<s\\> \\[INST\\] ... \\<\\/s\\> \n\nIt may also work with ChatML format, though I see more repetitions when I use that.", "## Model Details\n\nBased on URL because it is one of the most uncensored according to URL\n\n\n- Fine tuned by: someone\n- Finetuned from model: URL", "## Uses\n\nAsk any question, compared to other models this may know more about Nostr and Bitcoin.\nYou can use URL to chat with it.\nYou can also use llama-cpp-python package to use it in a Python script.", "## Warning\n\nUsers (both direct and downstream) should be aware of the risks, biases and limitations of the model.\nThe trainer, developer or uploader of this model does not assume any liability. Use it at your own risk.", "## Training Details", "### Training Data\n\nNostr related info from web and nostr itself, bitcoin related info.", "### Training Procedure\n\nLLaMa-Factory is used to train on 2x3090! fsdp_qlora is the technique. \n\nIt took ~185 hours for a dataset of 122MB." ]
[ "TAGS\n#gguf #license-apache-2.0 #region-us \n", "# Model Card for Ostrich\n\n\n- Contentious, judgemental, uncensored, can't agree with itself 32% of the time!\n- Trained a bit about nostr\n- Trained a bit about bitcoin\n- Trained a bit in the health domain\n\nI am having success with chat template: \\<s\\> \\[INST\\] ... \\<\\/s\\> \n\nIt may also work with ChatML format, though I see more repetitions when I use that.", "## Model Details\n\nBased on URL because it is one of the most uncensored according to URL\n\n\n- Fine tuned by: someone\n- Finetuned from model: URL", "## Uses\n\nAsk any question, compared to other models this may know more about Nostr and Bitcoin.\nYou can use URL to chat with it.\nYou can also use llama-cpp-python package to use it in a Python script.", "## Warning\n\nUsers (both direct and downstream) should be aware of the risks, biases and limitations of the model.\nThe trainer, developer or uploader of this model does not assume any liability. Use it at your own risk.", "## Training Details", "### Training Data\n\nNostr related info from web and nostr itself, bitcoin related info.", "### Training Procedure\n\nLLaMa-Factory is used to train on 2x3090! fsdp_qlora is the technique. \n\nIt took ~185 hours for a dataset of 122MB." ]
text-generation
transformers
<img src="https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1/resolve/main/logo.png" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Zephyr 141B-A35B Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) that was trained using a novel alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) with **7k instances** for **1.3 hours** on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English. - **License:** Apache 2.0 - **Finetuned from model:** [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Dataset:** https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized ## Performance Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911). The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. | Model | MT Bench | IFEval | BBH | AGIEval | |-----------------------------------------------------------------------------------------------------|---------:|-------:|------:|--------:| | [zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) | 8.17 | 65.06 | 58.96 | 44.16 | | [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | 8.26 | 52.13 | 48.50 | 41.16 | | [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8.30 | 55.08 | 45.31 | 47.68 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install 'transformers>=4.39.3' # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "You are Zephyr, a helpful assistant.", }, {"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."}, ] outputs = pipe( messages, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, ) print(outputs[0]["generated_text"][-1]["content"]) ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`mistral-community/Mixtral-8x22B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - total_train_batch_size: 32 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 ## Citation If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper: ``` @misc{hong2024orpo, title={ORPO: Monolithic Preference Optimization without Reference Model}, author={Jiwoo Hong and Noah Lee and James Thorne}, year={2024}, eprint={2403.07691}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` You may also wish to cite the creators of this model: ``` @misc{zephyr_141b, author = {Alvaro Bartolome and Jiwoo Hong and Noah Lee and Kashif Rasul and Lewis Tunstall}, title = {Zephyr 141B A35B}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1}} } ```
{"license": "apache-2.0", "tags": ["trl", "orpo", "generated_from_trainer"], "datasets": ["argilla/distilabel-capybara-dpo-7k-binarized"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "zephyr-orpo-141b-A35b-v0.1", "results": []}]}
blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.6
null
[ "transformers", "safetensors", "mixtral", "text-generation", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "arxiv:2403.07691", "arxiv:2311.07911", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T00:29:54+00:00
[ "2403.07691", "2311.07911" ]
[]
TAGS #transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for Zephyr 141B-A35B =============================== Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of mistral-community/Mixtral-8x22B-v0.1 that was trained using a novel alignment algorithm called Odds Ratio Preference Optimization (ORPO) with 7k instances for 1.3 hours on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the 'argilla/distilabel-capybara-dpo-7k-binarized' preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face > > > Model Details ------------- ### Model Description * Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English. * License: Apache 2.0 * Finetuned from model: mistral-community/Mixtral-8x22B-v0.1 ### Model Sources * Repository: URL * Dataset: URL Performance ----------- Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. Intended uses & limitations --------------------------- The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-06 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 32 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: inverse\_sqrt * lr\_scheduler\_warmup\_steps: 100 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.1 If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper: You may also wish to cite the creators of this model:
[ "### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1", "### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1", "### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:" ]
text-to-image
diffusers
# luminaxl <Gallery /> ## Model description Um modelo de geração de imagens que usa Safetensors de vários outros para mesclar e criar o melhor resultado. ## Trigger words You should use `lumina` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/synthetica/luminaxl/tree/main) them in the Files & versions tab.
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "A flock of paper airplanes flutters through a dense jungle, weaving around trees as if they were migrating birds.", "output": {"url": "images/A flock of paper airplanes flutters through a dense jungle, weaving around trees as if they were migrating birds..png"}}, {"text": "Black silhouette of a person standing with his back to a white background, clean shadow style, minimalist art.", "output": {"url": "images/Black silhouette of a person standing with his back to a white background, clean shadow style, minimalist art..png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "lumina"}
synthetica/luminaxl
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2024-04-13T00:32:35+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
# luminaxl <Gallery /> ## Model description Um modelo de geração de imagens que usa Safetensors de vários outros para mesclar e criar o melhor resultado. ## Trigger words You should use 'lumina' to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# luminaxl\n\n<Gallery />", "## Model description \n\nUm modelo de geração de imagens que usa Safetensors de vários outros para mesclar e criar o melhor resultado.", "## Trigger words\n\nYou should use 'lumina' to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n", "# luminaxl\n\n<Gallery />", "## Model description \n\nUm modelo de geração de imagens que usa Safetensors de vários outros para mesclar e criar o melhor resultado.", "## Trigger words\n\nYou should use 'lumina' to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # leagaleasy-mistral-7b-instruct-v0.2-v1 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "leagaleasy-mistral-7b-instruct-v0.2-v1", "results": []}]}
asahikuroki222/leagaleasy-mistral-7b-instruct-v0.2-v1
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-13T00:34:33+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
# leagaleasy-mistral-7b-instruct-v0.2-v1 This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# leagaleasy-mistral-7b-instruct-v0.2-v1\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "# leagaleasy-mistral-7b-instruct-v0.2-v1\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-lora-food101 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 9 | 1.3100 | 0.862 | ### Framework versions - PEFT 0.5.0 - Transformers 4.36.0 - Pytorch 2.0.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-base-patch16-224-in21k-finetuned-lora-food101", "results": []}]}
Raja1234/vit-base-patch16-224-in21k-finetuned-lora-food101
null
[ "peft", "safetensors", "vit", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "region:us" ]
null
2024-04-13T00:35:35+00:00
[]
[]
TAGS #peft #safetensors #vit #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #region-us
vit-base-patch16-224-in21k-finetuned-lora-food101 ================================================= This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.005 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 512 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.5.0 * Transformers 4.36.0 * Pytorch 2.0.1 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 512\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.5.0\n* Transformers 4.36.0\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #vit #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 512\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.5.0\n* Transformers 4.36.0\n* Pytorch 2.0.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
Model info: https://civitai.com/models/65849?modelVersionId=76306
{"license": "other", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers"], "inference": true}
digiplay/SomethingPhenomenal_vivacityV2
null
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-13T00:38:35+00:00
[]
[]
TAGS #diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-other #endpoints_compatible #has_space #diffusers-StableDiffusionPipeline #region-us
Model info: URL
[]
[ "TAGS\n#diffusers #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-other #endpoints_compatible #has_space #diffusers-StableDiffusionPipeline #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_idpo_same_3iters_iter_2 This model is a fine-tuned version of [ShenaoZ/0.0001_idpo_same_3iters_iter_1](https://huggingface.co/ShenaoZ/0.0001_idpo_same_3iters_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_idpo_same_3iters_iter_1", "model-index": [{"name": "0.0001_idpo_same_3iters_iter_2", "results": []}]}
ShenaoZ/0.0001_idpo_same_3iters_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0001_idpo_same_3iters_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T00:39:03+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_idpo_same_3iters_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0001_idpo_same_3iters_iter_2 This model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_3iters_iter_1 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0001_idpo_same_3iters_iter_2\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_3iters_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 128\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_idpo_same_3iters_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0001_idpo_same_3iters_iter_2\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_idpo_same_3iters_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 128\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shorok31/BLIP_LORA_2
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:39:35+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
{"library_name": "peft", "base_model": "deepseek-ai/deepseek-coder-1.3b-instruct"}
CMU-AIR2/math-deepseek-lora-hard-arith
null
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:deepseek-ai/deepseek-coder-1.3b-instruct", "region:us" ]
null
2024-04-13T00:48:46+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #llama #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-1.3b-instruct #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.8.2
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.8.2" ]
[ "TAGS\n#peft #safetensors #llama #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-1.3b-instruct #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.8.2" ]
summarization
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-log-sage This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the hdfs_log_summary_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.5181 - Rouge1: 0.4709 - Rouge2: 0.1615 - Rougel: 0.3748 - Rougelsum: 0.3905 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 12 | 2.9597 | 0.1985 | 0.0098 | 0.1629 | 0.1658 | 18.8 | | No log | 2.0 | 24 | 2.5389 | 0.3028 | 0.0271 | 0.2401 | 0.2492 | 17.8 | | No log | 3.0 | 36 | 2.2506 | 0.3349 | 0.0688 | 0.2549 | 0.2789 | 19.0 | | No log | 4.0 | 48 | 2.0524 | 0.4046 | 0.0982 | 0.3249 | 0.3409 | 19.0 | | No log | 5.0 | 60 | 1.9082 | 0.4479 | 0.1438 | 0.3449 | 0.3617 | 19.0 | | No log | 6.0 | 72 | 1.8325 | 0.4564 | 0.1577 | 0.3402 | 0.3562 | 18.8 | | No log | 7.0 | 84 | 1.7565 | 0.4441 | 0.1456 | 0.3335 | 0.351 | 19.0 | | No log | 8.0 | 96 | 1.7091 | 0.4691 | 0.1732 | 0.3486 | 0.3667 | 19.0 | | No log | 9.0 | 108 | 1.6683 | 0.4847 | 0.1645 | 0.3589 | 0.3667 | 19.0 | | No log | 10.0 | 120 | 1.5987 | 0.4847 | 0.1727 | 0.3667 | 0.3667 | 19.0 | | No log | 11.0 | 132 | 1.5606 | 0.4684 | 0.1935 | 0.3746 | 0.3751 | 19.0 | | No log | 12.0 | 144 | 1.5245 | 0.4749 | 0.193 | 0.3817 | 0.3894 | 19.0 | | No log | 13.0 | 156 | 1.4859 | 0.5163 | 0.2289 | 0.3802 | 0.3879 | 19.0 | | No log | 14.0 | 168 | 1.4950 | 0.4404 | 0.1522 | 0.3474 | 0.3474 | 19.0 | | No log | 15.0 | 180 | 1.4552 | 0.4609 | 0.1865 | 0.3573 | 0.362 | 19.0 | | No log | 16.0 | 192 | 1.4501 | 0.4521 | 0.1685 | 0.342 | 0.3423 | 19.0 | | No log | 17.0 | 204 | 1.3955 | 0.4763 | 0.1769 | 0.3788 | 0.379 | 19.0 | | No log | 18.0 | 216 | 1.4192 | 0.4602 | 0.199 | 0.3168 | 0.3178 | 19.0 | | No log | 19.0 | 228 | 1.3750 | 0.411 | 0.1258 | 0.3168 | 0.3269 | 19.0 | | No log | 20.0 | 240 | 1.3660 | 0.5038 | 0.2293 | 0.3638 | 0.3649 | 19.0 | | No log | 21.0 | 252 | 1.3610 | 0.4508 | 0.1364 | 0.3319 | 0.3397 | 19.0 | | No log | 22.0 | 264 | 1.3437 | 0.4495 | 0.1225 | 0.3217 | 0.3239 | 19.0 | | No log | 23.0 | 276 | 1.3394 | 0.4495 | 0.1225 | 0.3217 | 0.3239 | 19.0 | | No log | 24.0 | 288 | 1.3716 | 0.4499 | 0.1459 | 0.3562 | 0.3727 | 19.0 | | No log | 25.0 | 300 | 1.3673 | 0.4427 | 0.1585 | 0.3704 | 0.3784 | 19.0 | | No log | 26.0 | 312 | 1.3225 | 0.4427 | 0.1585 | 0.3704 | 0.3784 | 19.0 | | No log | 27.0 | 324 | 1.3041 | 0.4308 | 0.1457 | 0.3426 | 0.352 | 19.0 | | No log | 28.0 | 336 | 1.3350 | 0.4508 | 0.1459 | 0.3562 | 0.3647 | 19.0 | | No log | 29.0 | 348 | 1.3438 | 0.4243 | 0.1256 | 0.3364 | 0.3439 | 19.0 | | No log | 30.0 | 360 | 1.3332 | 0.4302 | 0.1262 | 0.3394 | 0.3474 | 19.0 | | No log | 31.0 | 372 | 1.3551 | 0.4647 | 0.1385 | 0.3595 | 0.3595 | 19.0 | | No log | 32.0 | 384 | 1.3822 | 0.4647 | 0.1385 | 0.3595 | 0.3595 | 19.0 | | No log | 33.0 | 396 | 1.3978 | 0.4647 | 0.1385 | 0.3595 | 0.3595 | 19.0 | | No log | 34.0 | 408 | 1.4044 | 0.4469 | 0.1331 | 0.3518 | 0.3518 | 19.0 | | No log | 35.0 | 420 | 1.3828 | 0.4614 | 0.1369 | 0.357 | 0.3727 | 19.0 | | No log | 36.0 | 432 | 1.3797 | 0.4551 | 0.1369 | 0.357 | 0.3727 | 19.0 | | No log | 37.0 | 444 | 1.3528 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | No log | 38.0 | 456 | 1.3716 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | No log | 39.0 | 468 | 1.4217 | 0.4429 | 0.124 | 0.3449 | 0.3606 | 19.0 | | No log | 40.0 | 480 | 1.4128 | 0.4429 | 0.124 | 0.3449 | 0.3606 | 19.0 | | No log | 41.0 | 492 | 1.3495 | 0.4429 | 0.124 | 0.3449 | 0.3606 | 19.0 | | 1.33 | 42.0 | 504 | 1.3608 | 0.4397 | 0.1117 | 0.348 | 0.3636 | 19.0 | | 1.33 | 43.0 | 516 | 1.4052 | 0.4605 | 0.1246 | 0.3688 | 0.3845 | 19.0 | | 1.33 | 44.0 | 528 | 1.3969 | 0.4605 | 0.1435 | 0.3688 | 0.3845 | 19.0 | | 1.33 | 45.0 | 540 | 1.3768 | 0.4551 | 0.1369 | 0.357 | 0.3727 | 19.0 | | 1.33 | 46.0 | 552 | 1.3903 | 0.4429 | 0.124 | 0.3449 | 0.3606 | 19.0 | | 1.33 | 47.0 | 564 | 1.3829 | 0.4458 | 0.1395 | 0.3547 | 0.3628 | 19.0 | | 1.33 | 48.0 | 576 | 1.3972 | 0.4551 | 0.1369 | 0.357 | 0.3727 | 19.0 | | 1.33 | 49.0 | 588 | 1.4015 | 0.4429 | 0.124 | 0.3449 | 0.3606 | 19.0 | | 1.33 | 50.0 | 600 | 1.3791 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 51.0 | 612 | 1.4205 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 52.0 | 624 | 1.4269 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 53.0 | 636 | 1.3988 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 54.0 | 648 | 1.4126 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 55.0 | 660 | 1.4178 | 0.4429 | 0.124 | 0.3449 | 0.3606 | 19.0 | | 1.33 | 56.0 | 672 | 1.4674 | 0.4332 | 0.1189 | 0.3408 | 0.3565 | 19.0 | | 1.33 | 57.0 | 684 | 1.4871 | 0.4543 | 0.1403 | 0.3546 | 0.3703 | 19.0 | | 1.33 | 58.0 | 696 | 1.4709 | 0.4547 | 0.1365 | 0.3567 | 0.3723 | 19.0 | | 1.33 | 59.0 | 708 | 1.4891 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 60.0 | 720 | 1.5033 | 0.4398 | 0.1109 | 0.3289 | 0.3446 | 19.0 | | 1.33 | 61.0 | 732 | 1.4830 | 0.4398 | 0.1109 | 0.3289 | 0.3446 | 19.0 | | 1.33 | 62.0 | 744 | 1.4642 | 0.4246 | 0.1042 | 0.335 | 0.3507 | 19.0 | | 1.33 | 63.0 | 756 | 1.4480 | 0.4246 | 0.1042 | 0.335 | 0.3507 | 19.0 | | 1.33 | 64.0 | 768 | 1.4312 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 65.0 | 780 | 1.4761 | 0.4378 | 0.1247 | 0.3458 | 0.3615 | 19.0 | | 1.33 | 66.0 | 792 | 1.4705 | 0.4378 | 0.1247 | 0.3458 | 0.3615 | 19.0 | | 1.33 | 67.0 | 804 | 1.4665 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 68.0 | 816 | 1.4700 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 69.0 | 828 | 1.4753 | 0.4493 | 0.124 | 0.3515 | 0.3669 | 19.0 | | 1.33 | 70.0 | 840 | 1.4910 | 0.4351 | 0.113 | 0.3354 | 0.351 | 19.0 | | 1.33 | 71.0 | 852 | 1.4857 | 0.4586 | 0.1505 | 0.3589 | 0.3746 | 19.0 | | 1.33 | 72.0 | 864 | 1.4965 | 0.4481 | 0.1399 | 0.3585 | 0.3727 | 19.0 | | 1.33 | 73.0 | 876 | 1.5141 | 0.4481 | 0.1399 | 0.3585 | 0.3727 | 19.0 | | 1.33 | 74.0 | 888 | 1.5162 | 0.4407 | 0.1358 | 0.3534 | 0.3687 | 19.0 | | 1.33 | 75.0 | 900 | 1.5005 | 0.4523 | 0.1439 | 0.3525 | 0.3682 | 19.0 | | 1.33 | 76.0 | 912 | 1.4910 | 0.417 | 0.1126 | 0.3258 | 0.3396 | 19.0 | | 1.33 | 77.0 | 924 | 1.4811 | 0.4174 | 0.1143 | 0.3375 | 0.3513 | 19.0 | | 1.33 | 78.0 | 936 | 1.4698 | 0.4312 | 0.1281 | 0.3534 | 0.3687 | 19.0 | | 1.33 | 79.0 | 948 | 1.4688 | 0.4298 | 0.1281 | 0.3522 | 0.3666 | 19.0 | | 1.33 | 80.0 | 960 | 1.4665 | 0.4312 | 0.1281 | 0.3534 | 0.3687 | 19.0 | | 1.33 | 81.0 | 972 | 1.4879 | 0.4601 | 0.1469 | 0.3684 | 0.3838 | 19.0 | | 1.33 | 82.0 | 984 | 1.4899 | 0.4601 | 0.1469 | 0.3684 | 0.3838 | 19.0 | | 1.33 | 83.0 | 996 | 1.4859 | 0.4601 | 0.1469 | 0.3684 | 0.3838 | 19.0 | | 0.5425 | 84.0 | 1008 | 1.4906 | 0.4645 | 0.1549 | 0.3684 | 0.3838 | 19.0 | | 0.5425 | 85.0 | 1020 | 1.4987 | 0.4547 | 0.1424 | 0.3567 | 0.3723 | 19.0 | | 0.5425 | 86.0 | 1032 | 1.4982 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 87.0 | 1044 | 1.4928 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 88.0 | 1056 | 1.4995 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 89.0 | 1068 | 1.4994 | 0.4547 | 0.1424 | 0.3567 | 0.3723 | 19.0 | | 0.5425 | 90.0 | 1080 | 1.5050 | 0.4547 | 0.1424 | 0.3567 | 0.3723 | 19.0 | | 0.5425 | 91.0 | 1092 | 1.5118 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 92.0 | 1104 | 1.5085 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 93.0 | 1116 | 1.5093 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 94.0 | 1128 | 1.5149 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 95.0 | 1140 | 1.5164 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 96.0 | 1152 | 1.5165 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 97.0 | 1164 | 1.5167 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 98.0 | 1176 | 1.5171 | 0.4611 | 0.149 | 0.363 | 0.3787 | 19.0 | | 0.5425 | 99.0 | 1188 | 1.5180 | 0.4709 | 0.1615 | 0.3748 | 0.3905 | 19.0 | | 0.5425 | 100.0 | 1200 | 1.5181 | 0.4709 | 0.1615 | 0.3748 | 0.3905 | 19.0 | ### Framework versions - Transformers 4.39.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["hdfs_log_summary_dataset"], "metrics": ["rouge"], "base_model": "google/flan-t5-base", "pipeline_tag": "summarization", "model-index": [{"name": "flan-log-sage", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "hdfs_log_summary_dataset", "type": "hdfs_log_summary_dataset", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "rouge", "value": 0.4709, "name": "Rouge1"}]}]}]}
IrwinD/flan-log-sage
null
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "summarization", "dataset:hdfs_log_summary_dataset", "base_model:google/flan-t5-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T00:48:54+00:00
[]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #generated_from_trainer #summarization #dataset-hdfs_log_summary_dataset #base_model-google/flan-t5-base #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
flan-log-sage ============= This model is a fine-tuned version of google/flan-t5-base on the hdfs\_log\_summary\_dataset dataset. It achieves the following results on the evaluation set: * Loss: 1.5181 * Rouge1: 0.4709 * Rouge2: 0.1615 * Rougel: 0.3748 * Rougelsum: 0.3905 * Gen Len: 19.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 100 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.0 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #summarization #dataset-hdfs_log_summary_dataset #base_model-google/flan-t5-base #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tom-brady/sn6_255
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:52:50+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/hflog/Walmart-the-bag-Misted-v2-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF/resolve/main/Walmart-the-bag-Misted-v2-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["code", "mistral", "merge", "slerp"], "base_model": "hflog/Walmart-the-bag-Misted-v2-7B", "quantized_by": "mradermacher"}
mradermacher/Walmart-the-bag-Misted-v2-7B-GGUF
null
[ "transformers", "gguf", "code", "mistral", "merge", "slerp", "en", "base_model:hflog/Walmart-the-bag-Misted-v2-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:53:24+00:00
[]
[ "en" ]
TAGS #transformers #gguf #code #mistral #merge #slerp #en #base_model-hflog/Walmart-the-bag-Misted-v2-7B #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #code #mistral #merge #slerp #en #base_model-hflog/Walmart-the-bag-Misted-v2-7B #license-apache-2.0 #endpoints_compatible #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vecvanilla_ctc_zero_infinity This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8214 - Wer: 0.3168 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.4684 | 0.43 | 100 | 1.0567 | 0.4018 | | 1.2572 | 0.85 | 200 | 0.9726 | 0.3706 | | 1.139 | 1.28 | 300 | 0.9748 | 0.3602 | | 1.0956 | 1.71 | 400 | 0.9989 | 0.3619 | | 1.0891 | 2.14 | 500 | 0.9133 | 0.3606 | | 1.063 | 2.56 | 600 | 0.9272 | 0.3548 | | 1.0339 | 2.99 | 700 | 1.0183 | 0.3444 | | 0.9709 | 3.42 | 800 | 0.8244 | 0.3488 | | 0.958 | 3.85 | 900 | 0.8335 | 0.3410 | | 0.8954 | 4.27 | 1000 | 0.8641 | 0.3336 | | 0.8735 | 4.7 | 1100 | 0.8671 | 0.3306 | | 0.8411 | 5.13 | 1200 | 0.8373 | 0.3281 | | 0.805 | 5.56 | 1300 | 0.8197 | 0.3198 | | 0.8452 | 5.98 | 1400 | 0.8343 | 0.3158 | | 0.8078 | 6.41 | 1500 | 0.8392 | 0.3165 | | 0.7946 | 6.84 | 1600 | 0.8214 | 0.3168 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-base-960h", "model-index": [{"name": "wav2vecvanilla_ctc_zero_infinity", "results": []}]}
charris/wav2vecvanilla_ctc_zero_infinity
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:54:50+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #endpoints_compatible #region-us
wav2vecvanilla\_ctc\_zero\_infinity =================================== This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.8214 * Wer: 0.3168 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 7 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
summarization
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioNLP-intro-disc-eLife-PLOS This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.3739167643078955e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 1.13.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.2
{"tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "BioNLP-intro-disc-eLife-PLOS", "results": []}]}
dtorber/BioNLP-intro-disc-eLife-PLOS
null
[ "transformers", "safetensors", "led", "text2text-generation", "summarization", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:56:10+00:00
[]
[]
TAGS #transformers #safetensors #led #text2text-generation #summarization #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
# BioNLP-intro-disc-eLife-PLOS This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.3739167643078955e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 1.13.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.2
[ "# BioNLP-intro-disc-eLife-PLOS\n\nThis model was trained from scratch on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.3739167643078955e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 1.13.1+cu117\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #led #text2text-generation #summarization #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "# BioNLP-intro-disc-eLife-PLOS\n\nThis model was trained from scratch on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.3739167643078955e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 1.13.1+cu117\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
visual-question-answering
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
thdangtr/blip_recipe1m_first
null
[ "transformers", "safetensors", "blip", "visual-question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:57:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #blip #visual-question-answering #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #blip #visual-question-answering #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# LuMiNa-V1 <Gallery /> ## Model description An normal AI model. ## Download model Weights for this model are available in Safetensors format. [Download](/synthetica/LuMiNa-V1/tree/main) them in the Files & versions tab.
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "An beautiful forest with a lot of paper airplanes.", "output": {"url": "images/A flock of paper airplanes flutters through a dense jungle, weaving around trees as if they were migrating birds..png"}}, {"text": "An minimalisit portrait of a black sillouette of a man walking on a void.", "output": {"url": "images/Black silhouette of a person standing with his back to a white background, clean shadow style, minimalist art..png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0"}
synthetica/LuMiNa-V1
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2024-04-13T01:04:49+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
# LuMiNa-V1 <Gallery /> ## Model description An normal AI model. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# LuMiNa-V1\n\n<Gallery />", "## Model description \n\nAn normal AI model.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n", "# LuMiNa-V1\n\n<Gallery />", "## Model description \n\nAn normal AI model.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
null
null
# DavidAU/Einstein-v6-7B-Q6_K-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v6-7B`](https://huggingface.co/Weyaxi/Einstein-v6-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v6-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v6-7B-Q6_K-GGUF --model einstein-v6-7b.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v6-7B-Q6_K-GGUF --model einstein-v6-7b.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v6-7b.Q6_K.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo"], "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction", "totally-not-an-llm/EverythingLM-data-V3", "HuggingFaceH4/no_robots", "OpenAssistant/oasst_top1_2023-08-25", "WizardLM/WizardLM_evol_instruct_70k"], "base_model": "alpindale/Mistral-7B-v0.2-hf", "model-index": [{"name": "Einstein-v6-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 63.57, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 82.76, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 62.23, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 52.02}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.61, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.53, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Einstein-v6-7B-Q6_K-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:HuggingFaceH4/no_robots", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:WizardLM/WizardLM_evol_instruct_70k", "base_model:alpindale/Mistral-7B-v0.2-hf", "license:other", "model-index", "region:us" ]
null
2024-04-13T01:08:07+00:00
[]
[ "en" ]
TAGS #gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #dataset-allenai/WildChat #dataset-microsoft/orca-math-word-problems-200k #dataset-openchat/openchat_sharegpt4_dataset #dataset-teknium/GPTeacher-General-Instruct #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-totally-not-an-llm/EverythingLM-data-V3 #dataset-HuggingFaceH4/no_robots #dataset-OpenAssistant/oasst_top1_2023-08-25 #dataset-WizardLM/WizardLM_evol_instruct_70k #base_model-alpindale/Mistral-7B-v0.2-hf #license-other #model-index #region-us
# DavidAU/Einstein-v6-7B-Q6_K-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v6-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #dataset-allenai/WildChat #dataset-microsoft/orca-math-word-problems-200k #dataset-openchat/openchat_sharegpt4_dataset #dataset-teknium/GPTeacher-General-Instruct #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-totally-not-an-llm/EverythingLM-data-V3 #dataset-HuggingFaceH4/no_robots #dataset-OpenAssistant/oasst_top1_2023-08-25 #dataset-WizardLM/WizardLM_evol_instruct_70k #base_model-alpindale/Mistral-7B-v0.2-hf #license-other #model-index #region-us \n", "# DavidAU/Einstein-v6-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Uploaded model - **Developed by:** theprint - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theprint/Mistral-7b-Instruct-v0.2-python-18k
null
[ "transformers", "pytorch", "safetensors", "gguf", "mistral", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T01:10:59+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #gguf #mistral #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: theprint - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: theprint\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #safetensors #gguf #mistral #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: theprint\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
# DavidAU/Einstein-v4-7B-Q6_K-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v4-7B`](https://huggingface.co/Weyaxi/Einstein-v4-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v4-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v4-7B-Q6_K-GGUF --model einstein-v4-7b.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v4-7B-Q6_K-GGUF --model einstein-v4-7b.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v4-7b.Q6_K.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo"], "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "glaiveai/glaive-code-assistant", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "Einstein-v4-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 64.68, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 83.75, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 62.31, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 55.15}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 76.24, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 57.62, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Einstein-v4-7B-Q6_K-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:glaiveai/glaive-code-assistant", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "base_model:mistralai/Mistral-7B-v0.1", "license:other", "model-index", "region:us" ]
null
2024-04-13T01:12:09+00:00
[]
[ "en" ]
TAGS #gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-glaiveai/glaive-code-assistant #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #base_model-mistralai/Mistral-7B-v0.1 #license-other #model-index #region-us
# DavidAU/Einstein-v4-7B-Q6_K-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v4-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-glaiveai/glaive-code-assistant #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #base_model-mistralai/Mistral-7B-v0.1 #license-other #model-index #region-us \n", "# DavidAU/Einstein-v4-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Einstein-v2-7B-Q6_K-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v2-7B`](https://huggingface.co/Weyaxi/Einstein-v2-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v2-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v2-7B-Q6_K-GGUF --model einstein-v2-7b.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v2-7B-Q6_K-GGUF --model einstein-v2-7b.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v2-7b.Q6_K.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "einstein-v2-test-model", "results": []}]}
DavidAU/Einstein-v2-7B-Q6_K-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-13T01:13:15+00:00
[]
[]
TAGS #gguf #axolotl #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# DavidAU/Einstein-v2-7B-Q6_K-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v2-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# DavidAU/Einstein-v2-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
{"library_name": "peft", "base_model": "ybelkada/falcon-7b-sharded-bf16"}
hazeum/model
null
[ "peft", "arxiv:1910.09700", "base_model:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-04-13T01:19:18+00:00
[ "1910.09700" ]
[]
TAGS #peft #arxiv-1910.09700 #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.1.dev0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
[ "TAGS\n#peft #arxiv-1910.09700 #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
null
null
# DavidAU/Einstein-v6-7B-Q4_K_M-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v6-7B`](https://huggingface.co/Weyaxi/Einstein-v6-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v6-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v6-7B-Q4_K_M-GGUF --model einstein-v6-7b.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v6-7B-Q4_K_M-GGUF --model einstein-v6-7b.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v6-7b.Q4_K_M.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo"], "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction", "totally-not-an-llm/EverythingLM-data-V3", "HuggingFaceH4/no_robots", "OpenAssistant/oasst_top1_2023-08-25", "WizardLM/WizardLM_evol_instruct_70k"], "base_model": "alpindale/Mistral-7B-v0.2-hf", "model-index": [{"name": "Einstein-v6-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 63.57, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 82.76, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 62.23, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 52.02}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.61, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.53, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Einstein-v6-7B-Q4_K_M-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:HuggingFaceH4/no_robots", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:WizardLM/WizardLM_evol_instruct_70k", "base_model:alpindale/Mistral-7B-v0.2-hf", "license:other", "model-index", "region:us" ]
null
2024-04-13T01:20:15+00:00
[]
[ "en" ]
TAGS #gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #dataset-allenai/WildChat #dataset-microsoft/orca-math-word-problems-200k #dataset-openchat/openchat_sharegpt4_dataset #dataset-teknium/GPTeacher-General-Instruct #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-totally-not-an-llm/EverythingLM-data-V3 #dataset-HuggingFaceH4/no_robots #dataset-OpenAssistant/oasst_top1_2023-08-25 #dataset-WizardLM/WizardLM_evol_instruct_70k #base_model-alpindale/Mistral-7B-v0.2-hf #license-other #model-index #region-us
# DavidAU/Einstein-v6-7B-Q4_K_M-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v6-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #dataset-allenai/WildChat #dataset-microsoft/orca-math-word-problems-200k #dataset-openchat/openchat_sharegpt4_dataset #dataset-teknium/GPTeacher-General-Instruct #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-totally-not-an-llm/EverythingLM-data-V3 #dataset-HuggingFaceH4/no_robots #dataset-OpenAssistant/oasst_top1_2023-08-25 #dataset-WizardLM/WizardLM_evol_instruct_70k #base_model-alpindale/Mistral-7B-v0.2-hf #license-other #model-index #region-us \n", "# DavidAU/Einstein-v6-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# Uploaded model - **Developed by:** Murilovisk - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl", "sft"], "base_model": "unsloth/gemma-7b-bnb-4bit"}
Murilovisk/gemma_model_unsloth
null
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-13T01:21:37+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #gemma #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
# Uploaded model - Developed by: Murilovisk - License: apache-2.0 - Finetuned from model : unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Murilovisk\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/gemma-7b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n", "# Uploaded model\n\n- Developed by: Murilovisk\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-7b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Sand-Red/Llama_CXR_OpenI
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T01:22:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/Einstein-v4-7B-Q4_K_M-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v4-7B`](https://huggingface.co/Weyaxi/Einstein-v4-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v4-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v4-7B-Q4_K_M-GGUF --model einstein-v4-7b.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v4-7B-Q4_K_M-GGUF --model einstein-v4-7b.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v4-7b.Q4_K_M.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo"], "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "glaiveai/glaive-code-assistant", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "Einstein-v4-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 64.68, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 83.75, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 62.31, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 55.15}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 76.24, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 57.62, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Einstein-v4-7B-Q4_K_M-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:glaiveai/glaive-code-assistant", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "base_model:mistralai/Mistral-7B-v0.1", "license:other", "model-index", "region:us" ]
null
2024-04-13T01:24:25+00:00
[]
[ "en" ]
TAGS #gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-glaiveai/glaive-code-assistant #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #base_model-mistralai/Mistral-7B-v0.1 #license-other #model-index #region-us
# DavidAU/Einstein-v4-7B-Q4_K_M-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v4-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-glaiveai/glaive-code-assistant #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #base_model-mistralai/Mistral-7B-v0.1 #license-other #model-index #region-us \n", "# DavidAU/Einstein-v4-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# c4ai-command-r-plus - EXL2 8.0bpw This is a 8.0bpw EXL2 quant of [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) Details about the model can be found at the above model page. ## Turbodep EXL2 Quants This repo only has specific quants not already done at [turboderp/command-r-plus-103B-exl2](https://huggingface.co/turboderp/command-r-plus-103B-exl2) Quants marked as turboderp can be downloaded from that repo. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | Repo | |-------------|------------------|------| | 6.0 | 4.7068 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 5.5 | 4.7136 | Dracones | | 5.0 | 4.7309 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.5 | 4.8111 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.25 | 4.8292 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.0 | 4.8603 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.75 | 4.9112 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.5 | 4.9592 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.25 | 5.0631 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.0 | 5.2050 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.75 | 5.3820 | Dracones | | 2.5 | 5.6681 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.25 | 5.9769 | Dracones | ## EQ Bench Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. | Quant Size | Alpaca | ChatML | Command-R | Command-R-Plus | |------------|--------|--------|--------|--------| | 6.0 | 70.77 | 62.58 | 75.81 | 74.95 | | 5.5 | 71.93 | 67.7 | 74.9 | 75.48 | | 5.0 | 69.51 | 63.94 | 74.92 | 75.28 | _Note:_ EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS_TOKEN into the starter prompt. _text-generation-webui/instruction-templates/Command-R-Plus.yaml_: ```yaml instruction_template: |- {%- if messages[0]['role'] == 'system' -%} {%- set loop_messages = messages[1:] -%} {%- set system_message = messages[0]['content'] -%} {%- elif false == true -%} {%- set loop_messages = messages -%} {%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%} {%- else -%} {%- set loop_messages = messages -%} {%- set system_message = false -%} {%- endif -%} {%- if system_message != false -%} {{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- for message in loop_messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- elif message['role'] == 'assistant' -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }} {%- endif -%} ``` ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" BIT_PRECISIONS=(8.0 7.5 7.0 6.5 5.5 2.75 2.25) # MODEL_NAME="turboderp_command-r-plus-103B" # BIT_PRECISIONS=(6.0 5.0 4.5 4.25 4.0 3.75 3.5 3.25 3.0 2.5) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # MODEL_DIR="models/${MODEL_NAME}-exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(5.0) BIT_PRECISIONS=(8.0 7.5 6.5 5.5 2.75 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["exl2"]}
Dracones/c4ai-command-r-plus_exl2_8.0bpw
null
[ "transformers", "safetensors", "cohere", "text-generation", "exl2", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-13T01:24:39+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
c4ai-command-r-plus - EXL2 8.0bpw ================================= This is a 8.0bpw EXL2 quant of CohereForAI/c4ai-command-r-plus Details about the model can be found at the above model page. Turbodep EXL2 Quants -------------------- This repo only has specific quants not already done at turboderp/command-r-plus-103B-exl2 Quants marked as turboderp can be downloaded from that repo. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. Quant Level: 6.0, Perplexity Score: 4.7068, Repo: turboderp Quant Level: 5.5, Perplexity Score: 4.7136, Repo: Dracones Quant Level: 5.0, Perplexity Score: 4.7309, Repo: turboderp Quant Level: 4.5, Perplexity Score: 4.8111, Repo: turboderp Quant Level: 4.25, Perplexity Score: 4.8292, Repo: turboderp Quant Level: 4.0, Perplexity Score: 4.8603, Repo: turboderp Quant Level: 3.75, Perplexity Score: 4.9112, Repo: turboderp Quant Level: 3.5, Perplexity Score: 4.9592, Repo: turboderp Quant Level: 3.25, Perplexity Score: 5.0631, Repo: turboderp Quant Level: 3.0, Perplexity Score: 5.2050, Repo: turboderp Quant Level: 2.75, Perplexity Score: 5.3820, Repo: Dracones Quant Level: 2.5, Perplexity Score: 5.6681, Repo: turboderp Quant Level: 2.25, Perplexity Score: 5.9769, Repo: Dracones EQ Bench -------- Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. *Note:* EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\_TOKEN into the starter prompt. *text-generation-webui/instruction-templates/Command-R-Plus.yaml*: ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
text-to-image
diffusers
# LuMiNA Realism <Gallery /> ## Model description An ai text to image ## Download model Weights for this model are available in Safetensors format. [Download](/synthetica/luminarealismshaper/tree/main) them in the Files & versions tab.
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "Black silhouette of a person standing with his back to a white background, clean shadow style, minimalist art.", "output": {"url": "images/Black silhouette of a person standing with his back to a white background, clean shadow style, minimalist art..png"}}], "base_model": "stabilityai/sdxl-turbo"}
synthetica/luminarealismshaper
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/sdxl-turbo", "has_space", "region:us" ]
null
2024-04-13T01:25:31+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/sdxl-turbo #has_space #region-us
# LuMiNA Realism <Gallery /> ## Model description An ai text to image ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# LuMiNA Realism\n\n<Gallery />", "## Model description \n\nAn ai text to image", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/sdxl-turbo #has_space #region-us \n", "# LuMiNA Realism\n\n<Gallery />", "## Model description \n\nAn ai text to image", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
null
null
# DavidAU/Einstein-v2-7B-Q4_K_M-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v2-7B`](https://huggingface.co/Weyaxi/Einstein-v2-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v2-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v2-7B-Q4_K_M-GGUF --model einstein-v2-7b.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v2-7B-Q4_K_M-GGUF --model einstein-v2-7b.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v2-7b.Q4_K_M.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "einstein-v2-test-model", "results": []}]}
DavidAU/Einstein-v2-7B-Q4_K_M-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-13T01:25:52+00:00
[]
[]
TAGS #gguf #axolotl #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# DavidAU/Einstein-v2-7B-Q4_K_M-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v2-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# DavidAU/Einstein-v2-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Einstein-v6-7B-Q8_0-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v6-7B`](https://huggingface.co/Weyaxi/Einstein-v6-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v6-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v6-7B-Q8_0-GGUF --model einstein-v6-7b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v6-7B-Q8_0-GGUF --model einstein-v6-7b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v6-7b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo"], "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval", "allenai/WildChat", "microsoft/orca-math-word-problems-200k", "openchat/openchat_sharegpt4_dataset", "teknium/GPTeacher-General-Instruct", "m-a-p/CodeFeedback-Filtered-Instruction", "totally-not-an-llm/EverythingLM-data-V3", "HuggingFaceH4/no_robots", "OpenAssistant/oasst_top1_2023-08-25", "WizardLM/WizardLM_evol_instruct_70k"], "base_model": "alpindale/Mistral-7B-v0.2-hf", "model-index": [{"name": "Einstein-v6-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 63.57, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 82.76, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 62.23, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 52.02}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.61, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.53, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Einstein-v6-7B-Q8_0-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:HuggingFaceH4/no_robots", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:WizardLM/WizardLM_evol_instruct_70k", "base_model:alpindale/Mistral-7B-v0.2-hf", "license:other", "model-index", "region:us" ]
null
2024-04-13T01:27:43+00:00
[]
[ "en" ]
TAGS #gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #dataset-allenai/WildChat #dataset-microsoft/orca-math-word-problems-200k #dataset-openchat/openchat_sharegpt4_dataset #dataset-teknium/GPTeacher-General-Instruct #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-totally-not-an-llm/EverythingLM-data-V3 #dataset-HuggingFaceH4/no_robots #dataset-OpenAssistant/oasst_top1_2023-08-25 #dataset-WizardLM/WizardLM_evol_instruct_70k #base_model-alpindale/Mistral-7B-v0.2-hf #license-other #model-index #region-us
# DavidAU/Einstein-v6-7B-Q8_0-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v6-7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #dataset-allenai/WildChat #dataset-microsoft/orca-math-word-problems-200k #dataset-openchat/openchat_sharegpt4_dataset #dataset-teknium/GPTeacher-General-Instruct #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-totally-not-an-llm/EverythingLM-data-V3 #dataset-HuggingFaceH4/no_robots #dataset-OpenAssistant/oasst_top1_2023-08-25 #dataset-WizardLM/WizardLM_evol_instruct_70k #base_model-alpindale/Mistral-7B-v0.2-hf #license-other #model-index #region-us \n", "# DavidAU/Einstein-v6-7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v6-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MLMA_Lab_8_GPT_model This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1509 - Precision: 0.4333 - Recall: 0.5197 - F1: 0.4726 - Accuracy: 0.9564 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3275 | 1.0 | 679 | 0.1747 | 0.2975 | 0.4460 | 0.3569 | 0.9449 | | 0.169 | 2.0 | 1358 | 0.1661 | 0.3892 | 0.4956 | 0.4360 | 0.9510 | | 0.0994 | 3.0 | 2037 | 0.1509 | 0.4333 | 0.5197 | 0.4726 | 0.9564 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.0 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "MLMA_Lab_8_GPT_model", "results": []}]}
shubhanmathur/MLMA_Lab_8_GPT_model
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T01:29:51+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
MLMA\_Lab\_8\_GPT\_model ======================== This model is a fine-tuned version of [](URL on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1509 * Precision: 0.4333 * Recall: 0.5197 * F1: 0.4726 * Accuracy: 0.9564 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.0 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
# DavidAU/Einstein-v4-7B-Q8_0-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v4-7B`](https://huggingface.co/Weyaxi/Einstein-v4-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v4-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v4-7B-Q8_0-GGUF --model einstein-v4-7b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v4-7B-Q8_0-GGUF --model einstein-v4-7b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v4-7b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo"], "datasets": ["allenai/ai2_arc", "camel-ai/physics", "camel-ai/chemistry", "camel-ai/biology", "camel-ai/math", "metaeval/reclor", "openbookqa", "mandyyyyii/scibench", "derek-thomas/ScienceQA", "TIGER-Lab/ScienceEval", "jondurbin/airoboros-3.2", "LDJnr/Capybara", "Cot-Alpaca-GPT4-From-OpenHermes-2.5", "STEM-AI-mtl/Electrical-engineering", "knowrohit07/saraswati-stem", "sablo/oasst2_curated", "glaiveai/glaive-code-assistant", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "bigbio/med_qa", "meta-math/MetaMathQA-40K", "openbookqa", "piqa", "metaeval/reclor", "derek-thomas/ScienceQA", "scibench", "sciq", "Open-Orca/SlimOrca", "migtissera/Synthia-v1.3", "TIGER-Lab/ScienceEval"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "Einstein-v4-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 64.68, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 83.75, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 62.31, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 55.15}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 76.24, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 57.62, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Einstein-v4-7B-Q8_0-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:glaiveai/glaive-code-assistant", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "base_model:mistralai/Mistral-7B-v0.1", "license:other", "model-index", "region:us" ]
null
2024-04-13T01:32:09+00:00
[]
[ "en" ]
TAGS #gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-glaiveai/glaive-code-assistant #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #base_model-mistralai/Mistral-7B-v0.1 #license-other #model-index #region-us
# DavidAU/Einstein-v4-7B-Q8_0-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v4-7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #Mistral #instruct #finetune #chatml #gpt4 #synthetic data #science #physics #chemistry #biology #math #llama-cpp #gguf-my-repo #en #dataset-allenai/ai2_arc #dataset-camel-ai/physics #dataset-camel-ai/chemistry #dataset-camel-ai/biology #dataset-camel-ai/math #dataset-metaeval/reclor #dataset-openbookqa #dataset-mandyyyyii/scibench #dataset-derek-thomas/ScienceQA #dataset-TIGER-Lab/ScienceEval #dataset-jondurbin/airoboros-3.2 #dataset-LDJnr/Capybara #dataset-Cot-Alpaca-GPT4-From-OpenHermes-2.5 #dataset-STEM-AI-mtl/Electrical-engineering #dataset-knowrohit07/saraswati-stem #dataset-sablo/oasst2_curated #dataset-glaiveai/glaive-code-assistant #dataset-lmsys/lmsys-chat-1m #dataset-TIGER-Lab/MathInstruct #dataset-bigbio/med_qa #dataset-meta-math/MetaMathQA-40K #dataset-piqa #dataset-scibench #dataset-sciq #dataset-Open-Orca/SlimOrca #dataset-migtissera/Synthia-v1.3 #base_model-mistralai/Mistral-7B-v0.1 #license-other #model-index #region-us \n", "# DavidAU/Einstein-v4-7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v4-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) * [aloobun/Reyna-Mini-1.8B-v0.2](https://huggingface.co/aloobun/Reyna-Mini-1.8B-v0.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: aloobun/Reyna-Mini-1.8B-v0.2 layer_range: [0, 23] - sources: - model: Qwen/Qwen1.5-0.5B layer_range: [2, 3] merge_method: passthrough dtype: bfloat16 ```
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Qwen/Qwen1.5-0.5B", "aloobun/Reyna-Mini-1.8B-v0.2"]}
kcoopermiller/reyna-qwen-l2
null
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Qwen/Qwen1.5-0.5B", "base_model:aloobun/Reyna-Mini-1.8B-v0.2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T01:33:00+00:00
[]
[]
TAGS #transformers #safetensors #qwen2 #text-generation #mergekit #merge #conversational #base_model-Qwen/Qwen1.5-0.5B #base_model-aloobun/Reyna-Mini-1.8B-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * Qwen/Qwen1.5-0.5B * aloobun/Reyna-Mini-1.8B-v0.2 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Qwen/Qwen1.5-0.5B\n* aloobun/Reyna-Mini-1.8B-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #qwen2 #text-generation #mergekit #merge #conversational #base_model-Qwen/Qwen1.5-0.5B #base_model-aloobun/Reyna-Mini-1.8B-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Qwen/Qwen1.5-0.5B\n* aloobun/Reyna-Mini-1.8B-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
null
# DavidAU/Einstein-v2-7B-Q8_0-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v2-7B`](https://huggingface.co/Weyaxi/Einstein-v2-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v2-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Einstein-v2-7B-Q8_0-GGUF --model einstein-v2-7b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Einstein-v2-7B-Q8_0-GGUF --model einstein-v2-7b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v2-7b.Q8_0.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "einstein-v2-test-model", "results": []}]}
DavidAU/Einstein-v2-7B-Q8_0-GGUF
null
[ "gguf", "axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-13T01:33:06+00:00
[]
[]
TAGS #gguf #axolotl #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# DavidAU/Einstein-v2-7B-Q8_0-GGUF This model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Einstein-v2-7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #axolotl #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# DavidAU/Einstein-v2-7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Weyaxi/Einstein-v2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4486 | 0.0 | 1 | 1.4197 | | 1.3118 | 0.0 | 2 | 1.4156 | | 1.479 | 0.0 | 3 | 1.4118 | | 1.2823 | 0.01 | 4 | 1.4085 | | 1.3239 | 0.01 | 5 | 1.4056 | | 1.3008 | 0.01 | 6 | 1.4030 | | 1.4906 | 0.01 | 7 | 1.4009 | | 1.278 | 0.01 | 8 | 1.3992 | | 1.3363 | 0.01 | 9 | 1.3981 | | 1.351 | 0.02 | 10 | 1.3976 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "results", "results": []}]}
kta-dev/mistral-7b-results-old
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-13T01:33:32+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
results ======= This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.3976 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
fill-mask
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** JukerHD - **Model type:** Bert - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model [optional]:** Bert ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** jukerhd/dummy
{"language": ["en"], "license": "mit"}
jukerhd/dummy
null
[ "transformers", "safetensors", "camembert", "fill-mask", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T01:33:34+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #camembert #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID This modelcard aims to be a base template for new models. It has been generated using this raw template. ## Model Details ### Model Description - Developed by: JukerHD - Model type: Bert - Language(s) (NLP): English - License: MIT - Finetuned from model [optional]: Bert ### Model Sources [optional] - Repository: jukerhd/dummy
[ "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: JukerHD\n- Model type: Bert\n- Language(s) (NLP): English\n- License: MIT\n- Finetuned from model [optional]: Bert", "### Model Sources [optional]\n\n\n\n- Repository: jukerhd/dummy" ]
[ "TAGS\n#transformers #safetensors #camembert #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: JukerHD\n- Model type: Bert\n- Language(s) (NLP): English\n- License: MIT\n- Finetuned from model [optional]: Bert", "### Model Sources [optional]\n\n\n\n- Repository: jukerhd/dummy" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) * [aloobun/Reyna-Mini-1.8B-v0.2](https://huggingface.co/aloobun/Reyna-Mini-1.8B-v0.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: aloobun/Reyna-Mini-1.8B-v0.2 layer_range: [0, 23] - sources: - model: Qwen/Qwen1.5-0.5B layer_range: [18, 19] merge_method: passthrough dtype: bfloat16 ```
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Qwen/Qwen1.5-0.5B", "aloobun/Reyna-Mini-1.8B-v0.2"]}
kcoopermiller/reyna-qwen-l18
null
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Qwen/Qwen1.5-0.5B", "base_model:aloobun/Reyna-Mini-1.8B-v0.2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T01:33:43+00:00
[]
[]
TAGS #transformers #safetensors #qwen2 #text-generation #mergekit #merge #conversational #base_model-Qwen/Qwen1.5-0.5B #base_model-aloobun/Reyna-Mini-1.8B-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * Qwen/Qwen1.5-0.5B * aloobun/Reyna-Mini-1.8B-v0.2 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Qwen/Qwen1.5-0.5B\n* aloobun/Reyna-Mini-1.8B-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #qwen2 #text-generation #mergekit #merge #conversational #base_model-Qwen/Qwen1.5-0.5B #base_model-aloobun/Reyna-Mini-1.8B-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Qwen/Qwen1.5-0.5B\n* aloobun/Reyna-Mini-1.8B-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poetry-rugpt3small This model is a fine-tuned version of [ai-forever/rugpt3small_based_on_gpt2](https://huggingface.co/ai-forever/rugpt3small_based_on_gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.0.1+cu118 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "ai-forever/rugpt3small_based_on_gpt2", "model-index": [{"name": "poetry-rugpt3small", "results": []}]}
Owling797/poetry-rugpt3small
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:ai-forever/rugpt3small_based_on_gpt2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T01:39:39+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-ai-forever/rugpt3small_based_on_gpt2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# poetry-rugpt3small This model is a fine-tuned version of ai-forever/rugpt3small_based_on_gpt2 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.0.1+cu118 - Tokenizers 0.15.2
[ "# poetry-rugpt3small\n\nThis model is a fine-tuned version of ai-forever/rugpt3small_based_on_gpt2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.0.1+cu118\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-ai-forever/rugpt3small_based_on_gpt2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# poetry-rugpt3small\n\nThis model is a fine-tuned version of ai-forever/rugpt3small_based_on_gpt2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.0.1+cu118\n- Tokenizers 0.15.2" ]
text-generation
transformers
# c4ai-command-r-plus - EXL2 7.5bpw This is a 7.5bpw EXL2 quant of [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) Details about the model can be found at the above model page. ## Turbodep EXL2 Quants This repo only has specific quants not already done at [turboderp/command-r-plus-103B-exl2](https://huggingface.co/turboderp/command-r-plus-103B-exl2) Quants marked as turboderp can be downloaded from that repo. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | Repo | |-------------|------------------|------| | 6.0 | 4.7068 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 5.5 | 4.7136 | Dracones | | 5.0 | 4.7309 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.5 | 4.8111 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.25 | 4.8292 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.0 | 4.8603 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.75 | 4.9112 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.5 | 4.9592 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.25 | 5.0631 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.0 | 5.2050 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.75 | 5.3820 | Dracones | | 2.5 | 5.6681 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.25 | 5.9769 | Dracones | ## EQ Bench Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. | Quant Size | Alpaca | ChatML | Command-R | Command-R-Plus | |------------|--------|--------|--------|--------| | 6.0 | 70.77 | 62.58 | 75.81 | 74.95 | | 5.5 | 71.93 | 67.7 | 74.9 | 75.48 | | 5.0 | 69.51 | 63.94 | 74.92 | 75.28 | _Note:_ EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS_TOKEN into the starter prompt. _text-generation-webui/instruction-templates/Command-R-Plus.yaml_: ```yaml instruction_template: |- {%- if messages[0]['role'] == 'system' -%} {%- set loop_messages = messages[1:] -%} {%- set system_message = messages[0]['content'] -%} {%- elif false == true -%} {%- set loop_messages = messages -%} {%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%} {%- else -%} {%- set loop_messages = messages -%} {%- set system_message = false -%} {%- endif -%} {%- if system_message != false -%} {{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- for message in loop_messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- elif message['role'] == 'assistant' -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }} {%- endif -%} ``` ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" BIT_PRECISIONS=(8.0 7.5 7.0 6.5 5.5 2.75 2.25) # MODEL_NAME="turboderp_command-r-plus-103B" # BIT_PRECISIONS=(6.0 5.0 4.5 4.25 4.0 3.75 3.5 3.25 3.0 2.5) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # MODEL_DIR="models/${MODEL_NAME}-exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(5.0) BIT_PRECISIONS=(8.0 7.5 6.5 5.5 2.75 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["exl2"]}
Dracones/c4ai-command-r-plus_exl2_7.5bpw
null
[ "transformers", "safetensors", "cohere", "text-generation", "exl2", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T01:42:06+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
c4ai-command-r-plus - EXL2 7.5bpw ================================= This is a 7.5bpw EXL2 quant of CohereForAI/c4ai-command-r-plus Details about the model can be found at the above model page. Turbodep EXL2 Quants -------------------- This repo only has specific quants not already done at turboderp/command-r-plus-103B-exl2 Quants marked as turboderp can be downloaded from that repo. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. Quant Level: 6.0, Perplexity Score: 4.7068, Repo: turboderp Quant Level: 5.5, Perplexity Score: 4.7136, Repo: Dracones Quant Level: 5.0, Perplexity Score: 4.7309, Repo: turboderp Quant Level: 4.5, Perplexity Score: 4.8111, Repo: turboderp Quant Level: 4.25, Perplexity Score: 4.8292, Repo: turboderp Quant Level: 4.0, Perplexity Score: 4.8603, Repo: turboderp Quant Level: 3.75, Perplexity Score: 4.9112, Repo: turboderp Quant Level: 3.5, Perplexity Score: 4.9592, Repo: turboderp Quant Level: 3.25, Perplexity Score: 5.0631, Repo: turboderp Quant Level: 3.0, Perplexity Score: 5.2050, Repo: turboderp Quant Level: 2.75, Perplexity Score: 5.3820, Repo: Dracones Quant Level: 2.5, Perplexity Score: 5.6681, Repo: turboderp Quant Level: 2.25, Perplexity Score: 5.9769, Repo: Dracones EQ Bench -------- Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. *Note:* EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\_TOKEN into the starter prompt. *text-generation-webui/instruction-templates/Command-R-Plus.yaml*: ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
reinforcement-learning
null
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-pixelcopter_V1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "43.80 +/- 38.59", "name": "mean_reward", "verified": false}]}]}]}
pdx97/Reinforce-pixelcopter_V1
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-13T01:43:03+00:00
[]
[]
TAGS #Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing Pixelcopter-PLE-v0 This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fakenews_binaryclassifier_distilbert_cased This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6203 - Accuracy: 0.736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6867 | 1.0 | 63 | 0.6704 | 0.648 | | 0.6618 | 2.0 | 126 | 0.6406 | 0.684 | | 0.6335 | 3.0 | 189 | 0.6203 | 0.736 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-cased", "model-index": [{"name": "fakenews_binaryclassifier_distilbert_cased", "results": []}]}
vishalk4u/fakenews_binaryclassifier_distilbert_cased
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T01:43:48+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
fakenews\_binaryclassifier\_distilbert\_cased ============================================= This model is a fine-tuned version of distilbert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.6203 * Accuracy: 0.736 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-06 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
exllama2 quantization - 4bpw # Rhea-72b-v0.5 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64241c3d774cc340797429fc/97nXDuEhQUom3vaVcEvV-.jpeg) The Rhea project is a project that conducts research on various learning methods to improve llm model performance. We fine-tuned the existing model using the [nox](https://github.com/davidkim205/nox) framework. We built a dataset for SFT learning based on the currently open dataset, and created a dataset using SGD (Self-Generated Dataset Creation Method for DPO Learning) for DPO learning. Our model ranked first on HuggingFace's Open LLM leaderboard. ## SGD : A Study on Self-Generated Dataset creation method for DPO Learning This method proposes a novel method for generating datasets for DPO (Self-supervised Learning) models. We suggest a technique where sentences generated by the model are compared with the actual correct answers from an existing dataset, and sentences where the model's generated results do not match the correct answers are added. This enables the model to autonomously create training data, thereby enhancing the performance of DPO models. ## Model Details * **Model Developers** : davidkim(changyeon kim) * **Repository** : [https://github.com/davidkim205/nox](https://github.com/davidkim205/nox) * **base mode** : abacusai/Smaug-72B-v0.1 * **sft dataset** : datasets_enconv_4m * **dpo dataset** : datasets_encomp_151k ## sft dataset info : datasets_enconv_4m ### 100k random shuffle datasets - stack-exchange-preferences - SlimOrca - alpaca-gpt4 - SHP - HC3 - databricks-dolly-15k - orca-dpo-pairs - us-stockname - OpenHermes2.5-dpo-binarized-alpha - distilabel-math-preference-dpo - Neural-DPO - truthy-dpo-v0.1 - distilabel-capybara-dpo-7k-binarized - us-sentiment - contextual-dpo-v0.1 ### 1k random shuffle datasets - bigbench - glue_mnli - glue_qqp - xnli - codexglue_code2text_go - trivia_qa - medmcqa - hendrycks_ethics - super_glue_record - glue_qnli - anli_r3 - swag - squad_v2 - nq_open - drop - glue_sst2 - blimp - paws-x - unscramble - anli_r2 - babi - math_qa - social_i_qa - piqa - arithmetic - anli_r1 - prost - sciq - mc_taco - medqa - super_glue_boolq - hendrycks_math - lambada - toxigen-data - glue_cola - pubmed_qa - logiqa - mutual - headqa - bbh - super_glue_wic - openbookqa - glue_mrpc - web_questions - qasper - super_glue_multirc - story_cloze - super_glue_rte - glue_rte - race - xwinograd - asdiv - xstory_cloze - crows_pairs_multilingual - belebele - glue_wnli - super_glue_wsc - coqa - super_glue_copa - super_glue_cb - winograd_wsc - mgsm - scrolls_contract_nli * If the data set cannot be found, it is internal company data and cannot be made public. ## dpo dataset info : datasets_encomp_151k Randomly selecting data from each category within the training dataset, we constructed a DPO (Direct Preference Optimization) dataset using sentences with logits lower than the mean within the model-generated sentences. * I'm sorry I can't reveal it. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5) | Metric |Value| |---------------------------------|----:| |Avg. |81.22| |AI2 Reasoning Challenge (25-Shot)|79.78| |HellaSwag (10-Shot) |91.15| |MMLU (5-Shot) |77.95| |TruthfulQA (0-shot) |74.50| |Winogrande (5-shot) |87.85| |GSM8k (5-shot) |76.12|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "model-index": [{"name": "Rhea-72b-v0.5", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 79.78, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davidkim205/Rhea-72b-v0.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 91.15, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davidkim205/Rhea-72b-v0.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 77.95, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davidkim205/Rhea-72b-v0.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 74.5}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davidkim205/Rhea-72b-v0.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 87.85, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davidkim205/Rhea-72b-v0.5", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 76.12, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=davidkim205/Rhea-72b-v0.5", "name": "Open LLM Leaderboard"}}]}]}
titan087/Rhea-72b-v0.5-exl2-4bpw
null
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-13T01:44:05+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #en #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
exllama2 quantization - 4bpw Rhea-72b-v0.5 ============= !image/jpeg The Rhea project is a project that conducts research on various learning methods to improve llm model performance. We fine-tuned the existing model using the nox framework. We built a dataset for SFT learning based on the currently open dataset, and created a dataset using SGD (Self-Generated Dataset Creation Method for DPO Learning) for DPO learning. Our model ranked first on HuggingFace's Open LLM leaderboard. SGD : A Study on Self-Generated Dataset creation method for DPO Learning ------------------------------------------------------------------------ This method proposes a novel method for generating datasets for DPO (Self-supervised Learning) models. We suggest a technique where sentences generated by the model are compared with the actual correct answers from an existing dataset, and sentences where the model's generated results do not match the correct answers are added. This enables the model to autonomously create training data, thereby enhancing the performance of DPO models. Model Details ------------- * Model Developers : davidkim(changyeon kim) * Repository : URL * base mode : abacusai/Smaug-72B-v0.1 * sft dataset : datasets\_enconv\_4m * dpo dataset : datasets\_encomp\_151k sft dataset info : datasets\_enconv\_4m --------------------------------------- ### 100k random shuffle datasets * stack-exchange-preferences * SlimOrca * alpaca-gpt4 * SHP * HC3 * databricks-dolly-15k * orca-dpo-pairs * us-stockname * OpenHermes2.5-dpo-binarized-alpha * distilabel-math-preference-dpo * Neural-DPO * truthy-dpo-v0.1 * distilabel-capybara-dpo-7k-binarized * us-sentiment * contextual-dpo-v0.1 ### 1k random shuffle datasets * bigbench * glue\_mnli * glue\_qqp * xnli * codexglue\_code2text\_go * trivia\_qa * medmcqa * hendrycks\_ethics * super\_glue\_record * glue\_qnli * anli\_r3 * swag * squad\_v2 * nq\_open * drop * glue\_sst2 * blimp * paws-x * unscramble * anli\_r2 * babi * math\_qa * social\_i\_qa * piqa * arithmetic * anli\_r1 * prost * sciq * mc\_taco * medqa * super\_glue\_boolq * hendrycks\_math * lambada * toxigen-data * glue\_cola * pubmed\_qa * logiqa * mutual * headqa * bbh * super\_glue\_wic * openbookqa * glue\_mrpc * web\_questions * qasper * super\_glue\_multirc * story\_cloze * super\_glue\_rte * glue\_rte * race * xwinograd * asdiv * xstory\_cloze * crows\_pairs\_multilingual * belebele * glue\_wnli * super\_glue\_wsc * coqa * super\_glue\_copa * super\_glue\_cb * winograd\_wsc * mgsm * scrolls\_contract\_nli * If the data set cannot be found, it is internal company data and cannot be made public. dpo dataset info : datasets\_encomp\_151k ----------------------------------------- Randomly selecting data from each category within the training dataset, we constructed a DPO (Direct Preference Optimization) dataset using sentences with logits lower than the mean within the model-generated sentences. * I'm sorry I can't reveal it. Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[ "### 100k random shuffle datasets\n\n\n* stack-exchange-preferences\n* SlimOrca\n* alpaca-gpt4\n* SHP\n* HC3\n* databricks-dolly-15k\n* orca-dpo-pairs\n* us-stockname\n* OpenHermes2.5-dpo-binarized-alpha\n* distilabel-math-preference-dpo\n* Neural-DPO\n* truthy-dpo-v0.1\n* distilabel-capybara-dpo-7k-binarized\n* us-sentiment\n* contextual-dpo-v0.1", "### 1k random shuffle datasets\n\n\n* bigbench\n* glue\\_mnli\n* glue\\_qqp\n* xnli\n* codexglue\\_code2text\\_go\n* trivia\\_qa\n* medmcqa\n* hendrycks\\_ethics\n* super\\_glue\\_record\n* glue\\_qnli\n* anli\\_r3\n* swag\n* squad\\_v2\n* nq\\_open\n* drop\n* glue\\_sst2\n* blimp\n* paws-x\n* unscramble\n* anli\\_r2\n* babi\n* math\\_qa\n* social\\_i\\_qa\n* piqa\n* arithmetic\n* anli\\_r1\n* prost\n* sciq\n* mc\\_taco\n* medqa\n* super\\_glue\\_boolq\n* hendrycks\\_math\n* lambada\n* toxigen-data\n* glue\\_cola\n* pubmed\\_qa\n* logiqa\n* mutual\n* headqa\n* bbh\n* super\\_glue\\_wic\n* openbookqa\n* glue\\_mrpc\n* web\\_questions\n* qasper\n* super\\_glue\\_multirc\n* story\\_cloze\n* super\\_glue\\_rte\n* glue\\_rte\n* race\n* xwinograd\n* asdiv\n* xstory\\_cloze\n* crows\\_pairs\\_multilingual\n* belebele\n* glue\\_wnli\n* super\\_glue\\_wsc\n* coqa\n* super\\_glue\\_copa\n* super\\_glue\\_cb\n* winograd\\_wsc\n* mgsm\n* scrolls\\_contract\\_nli\n\n\n* If the data set cannot be found, it is internal company data and cannot be made public.\n\n\ndpo dataset info : datasets\\_encomp\\_151k\n-----------------------------------------\n\n\nRandomly selecting data from each category within the training dataset, we constructed a DPO (Direct Preference Optimization) dataset using sentences with logits lower than the mean within the model-generated sentences.\n\n\n* I'm sorry I can't reveal it.\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #en #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### 100k random shuffle datasets\n\n\n* stack-exchange-preferences\n* SlimOrca\n* alpaca-gpt4\n* SHP\n* HC3\n* databricks-dolly-15k\n* orca-dpo-pairs\n* us-stockname\n* OpenHermes2.5-dpo-binarized-alpha\n* distilabel-math-preference-dpo\n* Neural-DPO\n* truthy-dpo-v0.1\n* distilabel-capybara-dpo-7k-binarized\n* us-sentiment\n* contextual-dpo-v0.1", "### 1k random shuffle datasets\n\n\n* bigbench\n* glue\\_mnli\n* glue\\_qqp\n* xnli\n* codexglue\\_code2text\\_go\n* trivia\\_qa\n* medmcqa\n* hendrycks\\_ethics\n* super\\_glue\\_record\n* glue\\_qnli\n* anli\\_r3\n* swag\n* squad\\_v2\n* nq\\_open\n* drop\n* glue\\_sst2\n* blimp\n* paws-x\n* unscramble\n* anli\\_r2\n* babi\n* math\\_qa\n* social\\_i\\_qa\n* piqa\n* arithmetic\n* anli\\_r1\n* prost\n* sciq\n* mc\\_taco\n* medqa\n* super\\_glue\\_boolq\n* hendrycks\\_math\n* lambada\n* toxigen-data\n* glue\\_cola\n* pubmed\\_qa\n* logiqa\n* mutual\n* headqa\n* bbh\n* super\\_glue\\_wic\n* openbookqa\n* glue\\_mrpc\n* web\\_questions\n* qasper\n* super\\_glue\\_multirc\n* story\\_cloze\n* super\\_glue\\_rte\n* glue\\_rte\n* race\n* xwinograd\n* asdiv\n* xstory\\_cloze\n* crows\\_pairs\\_multilingual\n* belebele\n* glue\\_wnli\n* super\\_glue\\_wsc\n* coqa\n* super\\_glue\\_copa\n* super\\_glue\\_cb\n* winograd\\_wsc\n* mgsm\n* scrolls\\_contract\\_nli\n\n\n* If the data set cannot be found, it is internal company data and cannot be made public.\n\n\ndpo dataset info : datasets\\_encomp\\_151k\n-----------------------------------------\n\n\nRandomly selecting data from each category within the training dataset, we constructed a DPO (Direct Preference Optimization) dataset using sentences with logits lower than the mean within the model-generated sentences.\n\n\n* I'm sorry I can't reveal it.\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here" ]
null
null
GGUF quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Yi-6B-200K - GGUF - Model creator: https://huggingface.co/01-ai/ - Original model: https://huggingface.co/01-ai/Yi-6B-200K/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Yi-6B-200K.Q2_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q2_K.gguf) | Q2_K | 2.18GB | | [Yi-6B-200K.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.IQ3_XS.gguf) | IQ3_XS | 2.41GB | | [Yi-6B-200K.IQ3_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.IQ3_S.gguf) | IQ3_S | 2.53GB | | [Yi-6B-200K.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q3_K_S.gguf) | Q3_K_S | 2.52GB | | [Yi-6B-200K.IQ3_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.IQ3_M.gguf) | IQ3_M | 2.62GB | | [Yi-6B-200K.Q3_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q3_K.gguf) | Q3_K | 2.79GB | | [Yi-6B-200K.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q3_K_M.gguf) | Q3_K_M | 2.79GB | | [Yi-6B-200K.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q3_K_L.gguf) | Q3_K_L | 3.01GB | | [Yi-6B-200K.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.IQ4_XS.gguf) | IQ4_XS | 3.11GB | | [Yi-6B-200K.Q4_0.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q4_0.gguf) | Q4_0 | 3.24GB | | [Yi-6B-200K.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.IQ4_NL.gguf) | IQ4_NL | 3.27GB | | [Yi-6B-200K.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q4_K_S.gguf) | Q4_K_S | 3.26GB | | [Yi-6B-200K.Q4_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q4_K.gguf) | Q4_K | 3.42GB | | [Yi-6B-200K.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q4_K_M.gguf) | Q4_K_M | 3.42GB | | [Yi-6B-200K.Q4_1.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q4_1.gguf) | Q4_1 | 3.58GB | | [Yi-6B-200K.Q5_0.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q5_0.gguf) | Q5_0 | 3.92GB | | [Yi-6B-200K.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q5_K_S.gguf) | Q5_K_S | 3.92GB | | [Yi-6B-200K.Q5_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q5_K.gguf) | Q5_K | 4.01GB | | [Yi-6B-200K.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q5_K_M.gguf) | Q5_K_M | 4.01GB | | [Yi-6B-200K.Q5_1.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q5_1.gguf) | Q5_1 | 4.25GB | | [Yi-6B-200K.Q6_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-6B-200K-gguf/blob/main/Yi-6B-200K.Q6_K.gguf) | Q6_K | 4.63GB | Original model description: --- license: other license_name: yi-license license_link: LICENSE widget: - example_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" - example_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline_tag: text-generation --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg"> </picture> </br> </br> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml"> <img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg"> </a> </div> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/blob/main/LICENSE"> <img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue"> </a> </div> <div style="display: inline-block;"> <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt"> <img src="https://img.shields.io/badge/Model_License-Yi_License-lightblue"> </a> </div> <div style="display: inline-block;"> <a href="mailto:[email protected]"> <img src="https://img.shields.io/badge/✉️[email protected]"> </a> </div> </div> <div align="center"> <h3 align="center">Building the Next Generation of Open-Source and Bilingual LLMs</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="01-ai/Yi · Discussions" target="_blank"> GitHub </a> </p> <p align="center"> 👋 Join us on <a href="https://discord.gg/hYUwWddeAu" target="_blank"> 👾 Discord </a> or <a href="有官方的微信群嘛 · Issue #43 · 01-ai/Yi" target="_blank"> 💬 WeChat </a> </p> <p align="center"> 📝 Check out <a href="https://arxiv.org/abs/2403.04652"> Yi Tech Report </a> </p> <p align="center"> 📚 Grow at <a href="#learning-hub"> Yi Learning Hub </a> </p> <!-- DO NOT REMOVE ME --> <hr> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi?](#what-is-yi) - [Introduction](#introduction) - [Models](#models) - [Chat models](#chat-models) - [Base models](#base-models) - [Model info](#model-info) - [News](#news) - [How to use Yi?](#how-to-use-yi) - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [llama.cpp](#quick-start---llamacpp) - [conda-lock](#quick-start---conda-lock) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [Learning hub](#learning-hub) - [Why Yi?](#why-yi) - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Base model performance](#base-model-performance) - [Chat model performance](#chat-model-performance) - [Tech report](#tech-report) - [Citation](#citation) - [Who can use Yi?](#who-can-use-yi) - [Misc.](#misc) - [Acknowledgements](#acknowledgments) - [Disclaimer](#disclaimer) - [License](#license) </details> <hr> # What is Yi? ## Introduction - 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/). - 🙌 Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, - Yi-34B-Chat model **landed in second place (following GPT-4 Turbo)**, outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). - Yi-34B model **ranked first among all existing open-source models** (such as Falcon-180B, Llama-70B, Claude) in **both English and Chinese** on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). - 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. <details style="display: inline;"><summary> If you're interested in Yi's adoption of Llama architecture and license usage policy, see <span style="color: green;">Yi's relation with Llama.</span> ⬇️</summary> <ul> <br> > 💡 TL;DR > > The Yi series models adopt the same model architecture as Llama but are **NOT** derivatives of Llama. - Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. - Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. - Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. - However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/). </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## News <details> <summary>🎯 <b>2024-03-16</b>: The <code>Yi-9B-200K</code> is open-sourced and available to the public.</summary> </details> <details open> <summary>🎯 <b>2024-03-08</b>: <a href="https://arxiv.org/abs/2403.04652">Yi Tech Report</a> is published! </summary> </details> <details open> <summary>🔔 <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance. </details> <details open> <summary>🎯 <b>2024-03-06</b>: The <code>Yi-9B</code> is open-sourced and available to the public.</summary> <br> <code>Yi-9B</code> stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. </details> <details open> <summary>🎯 <b>2024-01-23</b>: The Yi-VL models, <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> and <code><a href="https://huggingface.co/01-ai/Yi-VL-6B">Yi-VL-6B</a></code>, are open-sourced and available to the public.</summary> <br> <code><a href="https://huggingface.co/01-ai/Yi-VL-34B">Yi-VL-34B</a></code> has ranked <strong>first</strong> among all existing open-source models in the latest benchmarks, including <a href="https://arxiv.org/abs/2311.16502">MMMU</a> and <a href="https://arxiv.org/abs/2401.11944">CMMMU</a> (based on data available up to January 2024).</li> </details> <details> <summary>🎯 <b>2023-11-23</b>: <a href="#chat-models">Chat models</a> are open-sourced and available to the public.</summary> <br>This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. - `Yi-34B-Chat` - `Yi-34B-Chat-4bits` - `Yi-34B-Chat-8bits` - `Yi-6B-Chat` - `Yi-6B-Chat-4bits` - `Yi-6B-Chat-8bits` You can try some of them interactively at: - [Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Replicate](https://replicate.com/01-ai) </details> <details> <summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary> </details> <details> <summary>🔥 <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: - [English](https://cn.mikecrm.com/l91ODJf) - [Chinese](https://cn.mikecrm.com/gnEZjiQ) </details> <details> <summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary> <br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. </details> <details> <summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary> <br>The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Models Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment). ### Chat models | Model | Download |---|--- Yi-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat/summary) Yi-34B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-4bits/summary) Yi-34B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-Chat-8bits/summary) Yi-6B-Chat| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat/summary) Yi-6B-Chat-4bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-4bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-4bits/summary) Yi-6B-Chat-8bits | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-Chat-8bits) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-Chat-8bits/summary) <sub><sup> - 4-bit series models are quantized by AWQ. <br> - 8-bit series models are quantized by GPTQ <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). </sup></sub> ### Base models | Model | Download | |---|---| Yi-34B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B/summary) Yi-34B-200K|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-34B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-34B-200K/summary) Yi-9B|• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B) Yi-9B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-9B-200K) • [🤖 ModelScope](https://wisemodel.cn/models/01.AI/Yi-9B-200K) Yi-6B| • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B/summary) Yi-6B-200K | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-6B-200K) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-6B-200K/summary) <sub><sup> - 200k is roughly equivalent to 400,000 Chinese characters. <br> - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weight. </sup></sub> ### Model info - For chat and base models <table> <thead> <tr> <th>Model</th> <th>Intro</th> <th>Default context window</th> <th>Pretrained tokens</th> <th>Training Data Date</th> </tr> </thead> <tbody><tr> <td>6B series models</td> <td>They are suitable for personal and academic use.</td> <td rowspan="3">4K</td> <td>3T</td> <td rowspan="3">Up to June 2023</td> </tr> <tr> <td>9B series models</td> <td>It is the best at coding and math in the Yi series models.</td> <td>Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.</td> </tr> <tr> <td>34B series models</td> <td>They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It&#39;s a cost-effective solution that&#39;s affordable and equipped with emergent ability.</td> <td>3T</td> </tr> </tbody></table> - For chat models <details style="display: inline;"><summary>For chat model limitations, see the explanations below. ⬇️</summary> <ul> <br>The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. <br>However, this higher diversity might amplify certain existing issues, including: <li>Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.</li> <li>Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.</li> <li>Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.</li> <li>To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top_p, or top_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.</li> </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # How to use Yi? - [Quick start](#quick-start) - [Choose your path](#choose-your-path) - [pip](#quick-start---pip) - [docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - [llama.cpp](#quick-start---llamacpp) - [Web demo](#web-demo) - [Fine-tuning](#fine-tuning) - [Quantization](#quantization) - [Deployment](#deployment) - [Learning hub](#learning-hub) ## Quick start Getting up and running with Yi models is simple with multiple choices available. ### Choose your path Select one of the following paths to begin your journey with Yi! ![Quick start - Choose your path](https://github.com/01-ai/Yi/blob/main/assets/img/quick_start_path.png?raw=true) #### 🎯 Deploy Yi locally If you prefer to deploy Yi models locally, - 🙋‍♀️ and you have **sufficient** resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: - [pip](#quick-start---pip) - [Docker](#quick-start---docker) - [conda-lock](#quick-start---conda-lock) - 🙋‍♀️ and you have **limited** resources (for example, a MacBook Pro), you can use [llama.cpp](#quick-start---llamacpp). #### 🎯 Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### 🙋‍♀️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: - Yi APIs (Yi official) - [Early access has been granted](https://x.com/01AI_Yi/status/1735728934560600536?s=20) to some applicants. Stay tuned for the next round of access! - [Yi APIs](https://replicate.com/01-ai/yi-34b-chat/api?tab=nodejs) (Replicate) ##### 🙋‍♀️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: - [Yi-34B-Chat-Playground](https://platform.lingyiwanwu.com/prompt/playground) (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). - [Yi-34B-Chat-Playground](https://replicate.com/01-ai/yi-34b-chat) (Replicate) ##### 🙋‍♀️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: - [Yi-34B-Chat](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) (Yi official on Hugging Face) - No registration is required. - [Yi-34B-Chat](https://platform.lingyiwanwu.com/) (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)). <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - pip This tutorial guides you through every step of running **Yi-34B-Chat locally on an A800 (80G)** and then performing inference. #### Step 0: Prerequisites - Make sure Python 3.10 or a later version is installed. - If you want to run other Yi models, see [software and hardware requirements](#deployment). #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. ```bash git clone https://github.com/01-ai/Yi.git cd yi pip install -r requirements.txt ``` #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: - [Hugging Face](https://huggingface.co/01-ai) - [ModelScope](https://www.modelscope.cn/organization/01ai/) - [WiseModel](https://wisemodel.cn/organization/01.AI) #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named `quick_start.py` and copy the following content to it. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = '<your-model-path>' tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False) # Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM. model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ``` 2. Run `quick_start.py`. ```bash python quick_start.py ``` Then you can see an output similar to the one below. 🥳 ```bash Hello! How can I assist you today? ``` ##### Perform inference with Yi base model - Yi-34B The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). You can use the existing file [`text_generation.py`](https://github.com/01-ai/Yi/tree/main/demo). ```bash python demo/text_generation.py --model <your-model-path> ``` Then you can see an output similar to the one below. 🥳 <details> <summary>Output. ⬇️ </summary> <br> **Prompt**: Let me tell you an interesting story about cat Tom and mouse Jerry, **Generation**: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... </details> - Yi-9B Input ```bash from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_DIR = "01-ai/Yi-9B" model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype="auto") tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False) input_text = "# write the quick sort algorithm" inputs = tokenizer(input_text, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output ```bash # write the quick sort algorithm def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) # test the quick sort algorithm print(quick_sort([3, 6, 8, 10, 1, 2, 1])) ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quick start - Docker <details> <summary> Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️</summary> <br>This tutorial guides you through every step of running <strong>Yi-34B-Chat on an A800 GPU</strong> or <strong>4*4090</strong> locally and then performing inference. <h4>Step 0: Prerequisites</h4> <p>Make sure you've installed <a href="https://docs.docker.com/engine/install/?open_in_browser=true">Docker</a> and <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html">nvidia-container-toolkit</a>.</p> <h4> Step 1: Start Docker </h4> <pre><code>docker run -it --gpus all \ -v &lt;your-model-path&gt;: /models ghcr.io/01-ai/yi:latest </code></pre> <p>Alternatively, you can pull the Yi Docker image from <code>registry.lingyiwanwu.com/ci/01-ai/yi:latest</code>.</p> <h4>Step 2: Perform inference</h4> <p>You can perform inference with Yi chat or base models as below.</p> <h5>Perform inference with Yi chat model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-chat-model">pip - Perform inference with Yi chat model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>model_path = '&lt;your-model-mount-path&gt;'</code> instead of <code>model_path = '&lt;your-model-path&gt;'</code>.</p> <h5>Perform inference with Yi base model</h5> <p>The steps are similar to <a href="#perform-inference-with-yi-base-model">pip - Perform inference with Yi base model</a>.</p> <p><strong>Note</strong> that the only difference is to set <code>--model &lt;your-model-mount-path&gt;'</code> instead of <code>model &lt;your-model-path&gt;</code>.</p> </details> ### Quick start - conda-lock <details> <summary>You can use <code><a href="https://github.com/conda/conda-lock">conda-lock</a></code> to generate fully reproducible lock files for conda environments. ⬇️</summary> <br> You can refer to <a href="https://github.com/01-ai/Yi/blob/ebba23451d780f35e74a780987ad377553134f68/conda-lock.yml">conda-lock.yml</a> for the exact versions of the dependencies. Additionally, you can utilize <code><a href="https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html">micromamba</a></code> for installing these dependencies. <br> To install the dependencies, follow these steps: 1. Install micromamba by following the instructions available <a href="https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html">here</a>. 2. Execute <code>micromamba install -y -n yi -f conda-lock.yml</code> to create a conda environment named <code>yi</code> and install the necessary dependencies. </details> ### Quick start - llama.cpp <details> <summary> Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️</summary> <br>This tutorial guides you through every step of running a quantized model (<a href="https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main">Yi-chat-6B-2bits</a>) locally and then performing inference.</p> - [Step 0: Prerequisites](#step-0-prerequisites) - [Step 1: Download llama.cpp](#step-1-download-llamacpp) - [Step 2: Download Yi model](#step-2-download-yi-model) - [Step 3: Perform inference](#step-3-perform-inference) #### Step 0: Prerequisites - This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. - Make sure [`git-lfs`](https://git-lfs.com/) is installed on your machine. #### Step 1: Download `llama.cpp` To clone the [`llama.cpp`](https://github.com/ggerganov/llama.cpp) repository, run the following command. ```bash git clone [email protected]:ggerganov/llama.cpp.git ``` #### Step 2: Download Yi model 2.1 To clone [XeIaso/yi-chat-6B-GGUF](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/tree/main) with just pointers, run the following command. ```bash GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/XeIaso/yi-chat-6B-GGUF ``` 2.2 To download a quantized Yi model ([yi-chat-6b.Q2_K.gguf](https://huggingface.co/XeIaso/yi-chat-6B-GGUF/blob/main/yi-chat-6b.Q2_K.gguf)), run the following command. ```bash git-lfs pull --include yi-chat-6b.Q2_K.gguf ``` #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. - [Method 1: Perform inference in terminal](#method-1-perform-inference-in-terminal) - [Method 2: Perform inference in web](#method-2-perform-inference-in-web) ##### Method 1: Perform inference in terminal To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command. > ##### Tips > > - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model. > > - By default, the model operates in completion mode. > > - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage. ```bash make -j4 && ./main -m /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf -p "How do you feed your pet fox? Please answer this question in 6 simple steps:\nStep 1:" -n 384 -e ... How do you feed your pet fox? Please answer this question in 6 simple steps: Step 1: Select the appropriate food for your pet fox. You should choose high-quality, balanced prey items that are suitable for their unique dietary needs. These could include live or frozen mice, rats, pigeons, or other small mammals, as well as fresh fruits and vegetables. Step 2: Feed your pet fox once or twice a day, depending on the species and its individual preferences. Always ensure that they have access to fresh water throughout the day. Step 3: Provide an appropriate environment for your pet fox. Ensure it has a comfortable place to rest, plenty of space to move around, and opportunities to play and exercise. Step 4: Socialize your pet with other animals if possible. Interactions with other creatures can help them develop social skills and prevent boredom or stress. Step 5: Regularly check for signs of illness or discomfort in your fox. Be prepared to provide veterinary care as needed, especially for common issues such as parasites, dental health problems, or infections. Step 6: Educate yourself about the needs of your pet fox and be aware of any potential risks or concerns that could affect their well-being. Regularly consult with a veterinarian to ensure you are providing the best care. ... ``` Now you have successfully asked a question to the Yi model and got an answer! 🥳 ##### Method 2: Perform inference in web 1. To initialize a lightweight and swift chatbot, run the following command. ```bash cd llama.cpp ./server --ctx-size 2048 --host 0.0.0.0 --n-gpu-layers 64 --model /Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf ``` Then you can get an output like this: ```bash ... llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 5000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M2 Pro ggml_metal_init: picking default device: Apple M2 Pro ggml_metal_init: ggml.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: loading '/Users/yu/llama.cpp/ggml-metal.metal' ggml_metal_init: GPU name: Apple M2 Pro ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008) ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB ggml_metal_init: maxTransferRate = built-in GPU ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 128.00 MiB, ( 2629.44 / 10922.67) llama_new_context_with_model: KV self size = 128.00 MiB, K (f16): 64.00 MiB, V (f16): 64.00 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 2629.45 / 10922.67) llama_build_graph: non-view tensors processed: 676/676 llama_new_context_with_model: compute buffer total size = 159.19 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 156.02 MiB, ( 2785.45 / 10922.67) Available slots: -> Slot 0 - max context: 2048 llama server listening at http://0.0.0.0:8080 ``` 2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar. ![Yi model chatbot interface - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp1.png?raw=true) 3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer. ![Ask a question to Yi model - llama.cpp](https://github.com/01-ai/Yi/blob/main/assets/img/yi_llama_cpp2.png?raw=true) </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Web demo You can build a web UI demo for Yi **chat** models (note that Yi base models are not supported in this senario). [Step 1: Prepare your environment](#step-1-prepare-your-environment). [Step 2: Download the Yi model](#step-2-download-the-yi-model). Step 3. To start a web service locally, run the following command. ```bash python demo/web_demo.py -c <your-model-path> ``` You can access the web UI by entering the address provided in the console into your browser. ![Quick start - web demo](https://github.com/01-ai/Yi/blob/main/assets/img/yi_34b_chat_web_demo.gif?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Fine-tuning ```bash bash finetune/scripts/run_sft_Yi_6b.sh ``` Once finished, you can compare the finetuned model and the base model with the following command: ```bash bash finetune/scripts/run_eval.sh ``` <details style="display: inline;"><summary>For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ </summary> <ul> ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) to finetune the base model. You can also prepare your customized dataset in the following `jsonl` format: ```json { "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." } ``` And then mount them in the container to replace the default ones: ```bash docker run -it \ -v /path/to/save/finetuned/model/:/finetuned-model \ -v /path/to/train.jsonl:/yi/finetune/data/train.json \ -v /path/to/eval.jsonl:/yi/finetune/data/eval.json \ ghcr.io/01-ai/yi:latest \ bash finetune/scripts/run_sft_Yi_6b.sh ``` ##### From Local Server Make sure you have conda. If not, use ```bash mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 rm -rf ~/miniconda3/miniconda.sh ~/miniconda3/bin/conda init bash source ~/.bashrc ``` Then, create a conda env: ```bash conda create -n dev_env python=3.10 -y conda activate dev_env pip install torch==2.0.1 deepspeed==0.10 tensorboard transformers datasets sentencepiece accelerate ray==2.7 ``` #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA_VISIBLE_DEVICES to limit the number of GPUs (as shown in scripts/run_sft_Yi_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA_VISIBLE_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL_PATH (6B and 34B). A typical folder of models is like: ```bash |-- $MODEL_PATH | |-- config.json | |-- pytorch_model-00001-of-00002.bin | |-- pytorch_model-00002-of-00002.bin | |-- pytorch_model.bin.index.json | |-- tokenizer_config.json | |-- tokenizer.model | |-- ... ``` Download a dataset from huggingface to local storage DATA_PATH, e.g. Dahoas/rm-static. ```bash |-- $DATA_PATH | |-- data | | |-- train-00000-of-00001-2a1df75c6bce91ab.parquet | | |-- test-00000-of-00001-8c7c51afc6d45980.parquet | |-- dataset_infos.json | |-- README.md ``` `finetune/yi_example_dataset` has example datasets, which are modified from [BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG) ```bash |-- $DATA_PATH |--data |-- train.jsonl |-- eval.jsonl ``` `cd` into the scripts folder, copy and paste the script, and run. For example: ```bash cd finetune/scripts bash run_sft_Yi_6b.sh ``` For the Yi-6B base model, setting training_debug_steps=20 and num_train_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. #### Evaluation ```bash cd finetune/scripts bash run_eval.sh ``` Then you'll see the answer from both the base model and the finetuned model. </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Quantization #### GPT-Q ```bash python quantization/gptq/quant_autogptq.py \ --model /base_model \ --output_dir /quantized_model \ --trust_remote_code ``` Once finished, you can then evaluate the resulting model as follows: ```bash python quantization/gptq/eval_quantized_model.py \ --model /quantized_model \ --trust_remote_code ``` <details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul> #### GPT-Q quantization [GPT-Q](https://github.com/IST-DASLab/gptq) is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model. Yi models can be GPT-Q quantized without a lot of efforts. We provide a step-by-step tutorial below. To run GPT-Q, we will use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [exllama](https://github.com/turboderp/exllama). And the huggingface transformers has integrated optimum and auto-gptq to perform GPTQ quantization on language models. ##### Do Quantization The `quant_autogptq.py` script is provided for you to perform GPT-Q quantization: ```bash python quant_autogptq.py --model /base_model \ --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code ``` ##### Run Quantized Model You can run a quantized model using the `eval_quantized_model.py`: ```bash python eval_quantized_model.py --model /quantized_model --trust_remote_code ``` </ul> </details> #### AWQ ```bash python quantization/awq/quant_autoawq.py \ --model /base_model \ --output_dir /quantized_model \ --trust_remote_code ``` Once finished, you can then evaluate the resulting model as follows: ```bash python quantization/awq/eval_quantized_model.py \ --model /quantized_model \ --trust_remote_code ``` <details style="display: inline;"><summary>For details, see the explanations below. ⬇️</summary> <ul> #### AWQ quantization [AWQ](https://github.com/mit-han-lab/llm-awq) is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs. Yi models can be AWQ quantized without a lot of efforts. We provide a step-by-step tutorial below. To run AWQ, we will use [AutoAWQ](https://github.com/casper-hansen/AutoAWQ). ##### Do Quantization The `quant_autoawq.py` script is provided for you to perform AWQ quantization: ```bash python quant_autoawq.py --model /base_model \ --output_dir /quantized_model --bits 4 --group_size 128 --trust_remote_code ``` ##### Run Quantized Model You can run a quantized model using the `eval_quantized_model.py`: ```bash python eval_quantized_model.py --model /quantized_model --trust_remote_code ``` </ul> </details> <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Deployment If you want to deploy Yi models, make sure you meet the software and hardware requirements. #### Software requirements Before using Yi quantized models, make sure you've installed the correct software listed below. | Model | Software |---|--- Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi) Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation) #### Hardware requirements Before deploying Yi in your environment, make sure your hardware meets the following requirements. ##### Chat models | Model | Minimum VRAM | Recommended GPU Example | |:----------------------|:--------------|:-------------------------------------:| | Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) | | Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB)<br> 1 x RTX 4060 (8 GB) | | Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) <br> 1 x RTX 4060 (8 GB) | | Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB)<br> 1 x A800 (80GB) | | Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) <br> 1 x A100 (40 GB) | | Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB) <br> 2 x RTX 4090 (24 GB)<br> 1 x A800 (40 GB) | Below are detailed minimum VRAM requirements under different batch use cases. | Model | batch=1 | batch=4 | batch=16 | batch=32 | | ----------------------- | ------- | ------- | -------- | -------- | | Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB | | Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB | | Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB | | Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB | | Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB | | Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB | ##### Base models | Model | Minimum VRAM | Recommended GPU Example | |----------------------|--------------|:-------------------------------------:| | Yi-6B | 15 GB | 1 x RTX 3090 (24 GB) <br> 1 x RTX 4090 (24 GB) <br> 1 x A10 (24 GB) <br> 1 x A30 (24 GB) | | Yi-6B-200K | 50 GB | 1 x A800 (80 GB) | | Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) | | Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) <br> 1 x A800 (80 GB) | | Yi-34B-200K | 200 GB | 4 x A800 (80 GB) | <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Learning hub <details> <summary> If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️</summary> <br> Welcome to the Yi learning hub! Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 🥳 #### Tutorials ##### English tutorials | Type | Deliverable | Date | Author | |-------------|--------------------------------------------------------|----------------|----------------| | Video | [Run dolphin-2.2-yi-34b on IoT Devices](https://www.youtube.com/watch?v=NJ89T5mO25Y) | 2023-11-30 | [Second State](https://github.com/second-state) | | Blog | [Running Yi-34B-Chat locally using LlamaEdge](https://www.secondstate.io/articles/yi-34b/) | 2023-11-30 | [Second State](https://github.com/second-state) | | Video | [Install Yi 34B Locally - Chinese English Bilingual LLM](https://www.youtube.com/watch?v=CVQvj4Wrh4w&t=476s) | 2023-11-05 | [Fahd Mirza](https://www.youtube.com/@fahdmirza) | | Video | [Dolphin Yi 34b - Brand New Foundational Model TESTED](https://www.youtube.com/watch?v=On3Zuv27V3k&t=85s) | 2023-11-27 | [Matthew Berman](https://www.youtube.com/@matthew_berman) | ##### Chinese tutorials | Type | Deliverable | Date | Author | |-------------|--------------------------------------------------------|----------------|----------------| | Blog | [实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜”](https://mp.weixin.qq.com/s/fu4O9XvJ03JhimsEyI-SsQ) | 2024-02-02 | [苏洋](https://github.com/soulteary) | | Blog | [本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 显存](https://zhuanlan.zhihu.com/p/668921042) | 2023-11-26 | [苏洋](https://github.com/soulteary) | | Blog | [零一万物模型折腾笔记:官方 Yi-34B 模型基础使用](https://zhuanlan.zhihu.com/p/671387298) | 2023-12-10 | [苏洋](https://github.com/soulteary) | | Blog | [CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案](https://zhuanlan.zhihu.com/p/671698216) | 2023-12-12 | [苏洋](https://github.com/soulteary) | | Blog | [单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战](https://zhuanlan.zhihu.com/p/678989191) | 2024-01-22 | [郑耀威](https://github.com/hiyouga) | | Blog | [零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦!](https://zhuanlan.zhihu.com/p/680098411) | 2024-01-26 | [ModelScope](https://github.com/modelscope) | | Video | [只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型](https://www.bilibili.com/video/BV17t4y1f7Ee/) | 2023-12-28 | [漆妮妮](https://space.bilibili.com/1262370256) | | Video | [Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来](https://www.bilibili.com/video/BV1Q5411y7AG/) | 2023-01-28 | [漆妮妮](https://space.bilibili.com/1262370256) | </details> # Why Yi? - [Ecosystem](#ecosystem) - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) - [Benchmarks](#benchmarks) - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) - [Yi-34B and Yi-34B-200K](#yi-34b-and-yi-34b-200k) - [Yi-9B](#yi-9b) ## Ecosystem Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. - [Upstream](#upstream) - [Downstream](#downstream) - [Serving](#serving) - [Quantization](#quantization-1) - [Fine-tuning](#fine-tuning-1) - [API](#api) ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see [Use the chat model](#31-use-the-chat-model). ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34b", device_map="auto") ``` <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Downstream > 💡 Tip > > - Feel free to create a PR and share the fantastic work you've built using the Yi series models. > > - To help others quickly understand your work, it is recommended to use the format of `<model-name>: <model-intro> + <model-highlights>`. #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. - Yi-34B-Chat: you can chat with Yi using one of the following platforms: - [Yi-34B-Chat | Hugging Face](https://huggingface.co/spaces/01-ai/Yi-34B-Chat) - [Yi-34B-Chat | Yi Platform](https://platform.lingyiwanwu.com/): **Note** that currently it's available through a whitelist. Welcome to apply (fill out a form in [English](https://cn.mikecrm.com/l91ODJf) or [Chinese](https://cn.mikecrm.com/gnEZjiQ)) and experience it firsthand! - [Yi-6B-Chat (Replicate)](https://replicate.com/01-ai): you can use this model with more options by setting additional parameters and calling APIs. - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. - [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ) - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF) - [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. - [TheBloke Models](https://huggingface.co/TheBloke): this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: - [TheBloke/dolphin-2_2-yi-34b-AWQ](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ) - [TheBloke/Yi-34B-Chat-AWQ](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ) - [TheBloke/Yi-34B-Chat-GPTQ](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ) - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). - [OrionStarAI/OrionStar-Yi-34B-Chat-Llama](https://huggingface.co/OrionStarAI/OrionStar-Yi-34B-Chat-Llama): this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the [OpenCompass LLM Leaderboard](https://opencompass.org.cn/leaderboard-llm). - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B): this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API - [amazing-openai-api](https://github.com/soulteary/amazing-openai-api): this tool converts Yi model APIs into the OpenAI API format out of the box. - [LlamaEdge](https://www.secondstate.io/articles/yi-34b/#create-an-openai-compatible-api-service-for-the-yi-34b-chat-model): this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks - [Chat model performance](#chat-model-performance) - [Base model performance](#base-model-performance) ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. ![Chat model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_chat.png?raw=true) <details> <summary> Evaluation methods and challenges. ⬇️ </summary> - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. - **Zero-shot vs. few-shot**: in chat models, the zero-shot approach is more commonly employed. - **Evaluation strategy**: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. - **Challenges faced**: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. <strong>*</strong>: C-Eval results are evaluated on the validation datasets </details> ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. ![Base model performance](https://github.com/01-ai/Yi/blob/main/assets/img/benchmark_base.png?raw=true) <details> <summary> Evaluation methods. ⬇️</summary> - **Disparity in results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. - **Investigation findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. - **Uniform benchmarking process**: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. - **Efforts to retrieve unreported scores**: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. - **Extensive model evaluation**: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. - **Special configurations**: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". - **Falcon-180B caveat**: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. </details> #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. ![Yi-9B benchmark - details](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_details.png?raw=true) - In terms of **overall** ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - overall](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_overall.png?raw=true) - In terms of **coding** ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. ![Yi-9B benchmark - code](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_code.png?raw=true) - In terms of **math** ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B. ![Yi-9B benchmark - math](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_math.png?raw=true) - In terms of **common sense and reasoning** ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B. ![Yi-9B benchmark - text](https://github.com/01-ai/Yi/blob/main/assets/img/Yi-9B_benchmark_text.png?raw=true) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Who can use Yi? Everyone! 🙌 ✅ - The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt) - For free commercial use, you only need to [complete this form](https://www.lingyiwanwu.com/yi-license) to get a Yi Model Commercial License. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> # Misc. ### Acknowledgments A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation. [![yi contributors](https://contrib.rocks/image?repo=01-ai/yi&max=2000&columns=15)](https://github.com/01-ai/yi/graphs/contributors) <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### Disclaimer We use data compliance checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct, and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p> ### License The source code in this repo is licensed under the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the [Yi Series Models Community License Agreement 2.1](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt). For free commercial use, you only need to send an email to [get official commercial permission](https://www.lingyiwanwu.com/yi-license). <p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>
{}
RichardErkhov/01-ai_-_Yi-6B-200K-gguf
null
[ "gguf", "arxiv:2403.04652", "arxiv:2311.16502", "arxiv:2401.11944", "region:us" ]
null
2024-04-13T01:49:59+00:00
[ "2403.04652", "2311.16502", "2401.11944" ]
[]
TAGS #gguf #arxiv-2403.04652 #arxiv-2311.16502 #arxiv-2401.11944 #region-us
GGUF quantization made by Richard Erkhov. Github Discord Request more models Yi-6B-200K - GGUF * Model creator: URL * Original model: URL Name: Yi-6B-200K.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.18GB Name: Yi-6B-200K.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 2.41GB Name: Yi-6B-200K.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 2.53GB Name: Yi-6B-200K.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 2.52GB Name: Yi-6B-200K.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 2.62GB Name: Yi-6B-200K.Q3\_K.gguf, Quant method: Q3\_K, Size: 2.79GB Name: Yi-6B-200K.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 2.79GB Name: Yi-6B-200K.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.01GB Name: Yi-6B-200K.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 3.11GB Name: Yi-6B-200K.Q4\_0.gguf, Quant method: Q4\_0, Size: 3.24GB Name: Yi-6B-200K.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 3.27GB Name: Yi-6B-200K.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 3.26GB Name: Yi-6B-200K.Q4\_K.gguf, Quant method: Q4\_K, Size: 3.42GB Name: Yi-6B-200K.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 3.42GB Name: Yi-6B-200K.Q4\_1.gguf, Quant method: Q4\_1, Size: 3.58GB Name: Yi-6B-200K.Q5\_0.gguf, Quant method: Q5\_0, Size: 3.92GB Name: Yi-6B-200K.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 3.92GB Name: Yi-6B-200K.Q5\_K.gguf, Quant method: Q5\_K, Size: 4.01GB Name: Yi-6B-200K.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 4.01GB Name: Yi-6B-200K.Q5\_1.gguf, Quant method: Q5\_1, Size: 4.25GB Name: Yi-6B-200K.Q6\_K.gguf, Quant method: Q6\_K, Size: 4.63GB ``` Original model description: --- ``` license: other license\_name: yi-license license\_link: LICENSE widget: * example\_title: "Yi-34B-Chat" text: "hi" output: text: " Hello! How can I assist you today?" * example\_title: "Yi-34B" text: "There's a place where time stands still. A place of breath taking wonder, but also" output: text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let’s get started now shall We?" pipeline\_tag: text-generation --- ![specify theme context for images](URL </picture> </br> </br> <div style=) [![](URL️-yi@URL-FFE01B)](mailto:oss@URL) ### Building the Next Generation of Open-Source and Bilingual LLMs [Hugging Face](URL target=) • [ModelScope](URL target=) • ️ [WiseModel](URL target=) ‍ Ask questions or discuss ideas on [GitHub](01-ai/Yi · Discussions) Join us on [Discord](URL target=) or [WeChat](有官方的微信群嘛 · Issue #43 · 01-ai/Yi) Check out [Grow at [Yi Learning Hub](#learning-hub)](URL Yi Tech Report </a> </p> <p align=) --- Table of Contents * What is Yi? + Introduction + Models - Chat models - Base models - Model info + News * How to use Yi? + Quick start - Choose your path - pip - docker - URL - conda-lock - Web demo + Fine-tuning + Quantization + Deployment + Learning hub * Why Yi? + Ecosystem - Upstream - Downstream * Serving * Quantization * Fine-tuning * API + Benchmarks - Base model performance - Chat model performance + Tech report - Citation * Who can use Yi? * Misc. + Acknowledgements + Disclaimer + License --- What is Yi? =========== Introduction ------------ * The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI. * Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example, * Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024). * Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023). * (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem. If you're interested in Yi's adoption of Llama architecture and license usage policy, see Yi's relation with Llama. ⬇️ > > TL;DR > > > The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama. > > > + Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018. + Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi. + Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems. + However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights. - As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure. - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the Alpaca Leaderboard in Dec 2023. [ [Back to top ⬆️](#top) ] News ---- **2024-03-16**: The `Yi-9B-200K` is open-sourced and available to the public. **2024-03-08**: [**2024-03-06**: The `Yi-9B` is open-sourced and available to the public. `Yi-9B` stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. **2024-01-23**: The Yi-VL models, `[`[Chat models](URL (based on data available up to January 2024).</li> </details> <details> <summary> <b>2023-11-23</b>: <a href=) are open-sourced and available to the public.`](URL and <code><a href=)` This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ. * 'Yi-34B-Chat' * 'Yi-34B-Chat-4bits' * 'Yi-34B-Chat-8bits' * 'Yi-6B-Chat' * 'Yi-6B-Chat-4bits' * 'Yi-6B-Chat-8bits' You can try some of them interactively at: * Hugging Face * Replicate **2023-11-23**: The Yi Series Models Community License Agreement is updated to [The base models,](URL </details> <details> <summary> <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary> <br>Application form: <ul> <li>English</li> <li>Chinese</li> </ul> </details> <details> <summary> <b>2023-11-05</b>: <a href=) `Yi-6B-200K` and `Yi-34B-200K`, are open-sourced and available to the public. This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K. **2023-11-02**: [The base models,](#base-models) `Yi-6B` and `Yi-34B`, are open-sourced and available to the public. The first public release contains two bilingual (English/Chinese) base models with the parameter sizes of 6B and 34B. Both of them are trained with 4K sequence length and can be extended to 32K during inference time. [ [Back to top ⬆️](#top) ] Models ------ Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the software and hardware requirements. ### Chat models - 4-bit series models are quantized by AWQ. - 8-bit series models are quantized by GPTQ - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090). ### Base models - 200k is roughly equivalent to 400,000 Chinese characters. - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run 'git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf' to download the weight. ### Model info * For chat and base models Model: 9B series models, Intro: It is the best at coding and math in the Yi series models., Default context window: Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens. Model: 34B series models, Intro: They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability., Default context window: 3T * For chat models For chat model limitations, see the explanations below. ⬇️ The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training. However, this higher diversity might amplify certain existing issues, including: + Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning. + Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions. + Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc. + To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top\_p, or top\_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.](URL Tech Report</a> is published! </summary> </details> <details open> <summary> <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary> <br> In the ) [ [Back to top ⬆️](#top) ] How to use Yi? ============== * Quick start + Choose your path + pip + docker + conda-lock + URL + Web demo * Fine-tuning * Quantization * Deployment * Learning hub Quick start ----------- Getting up and running with Yi models is simple with multiple choices available. ### Choose your path Select one of the following paths to begin your journey with Yi! !Quick start - Choose your path #### Deploy Yi locally If you prefer to deploy Yi models locally, * ‍️ and you have sufficient resources (for example, NVIDIA A800 80GB), you can choose one of the following methods: + pip + Docker + conda-lock * ‍️ and you have limited resources (for example, a MacBook Pro), you can use URL. #### Not to deploy Yi locally If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options. ##### ‍️ Run Yi with APIs If you want to explore more features of Yi, you can adopt one of these methods: * Yi APIs (Yi official) + Early access has been granted to some applicants. Stay tuned for the next round of access! * Yi APIs (Replicate) ##### ‍️ Run Yi in playground If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options: * Yi-34B-Chat-Playground (Yi official) + Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese). * Yi-34B-Chat-Playground (Replicate) ##### ‍️ Chat with Yi If you want to chat with Yi, you can use one of these online services, which offer a similar user experience: * Yi-34B-Chat (Yi official on Hugging Face) + No registration is required. * Yi-34B-Chat (Yi official beta) + Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese). [ [Back to top ⬆️](#top) ] ### Quick start - pip This tutorial guides you through every step of running Yi-34B-Chat locally on an A800 (80G) and then performing inference. #### Step 0: Prerequisites * Make sure Python 3.10 or a later version is installed. * If you want to run other Yi models, see software and hardware requirements. #### Step 1: Prepare your environment To set up the environment and install the required packages, execute the following command. #### Step 2: Download the Yi model You can download the weights and tokenizer of Yi models from the following sources: * Hugging Face * ModelScope * WiseModel #### Step 3: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model 1. Create a file named 'quick\_start.py' and copy the following content to it. 2. Run 'quick\_start.py'. Then you can see an output similar to the one below. ##### Perform inference with Yi base model * Yi-34B The steps are similar to pip - Perform inference with Yi chat model. You can use the existing file 'text\_generation.py'. Then you can see an output similar to the one below. Output. ⬇️ Prompt: Let me tell you an interesting story about cat Tom and mouse Jerry, Generation: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up... * Yi-9B Input Output [ [Back to top ⬆️](#top) ] ### Quick start - Docker Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️ This tutorial guides you through every step of running **Yi-34B-Chat on an A800 GPU** or **4\*4090** locally and then performing inference. #### Step 0: Prerequisites Make sure you've installed [Step 1: Start Docker ``` docker run -it --gpus all \ -v <your-model-path>: /models URL ``` Alternatively, you can pull the Yi Docker image from `URL #### Step 2: Perform inference You can perform inference with Yi chat or base models as below. ##### Perform inference with Yi chat model The steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model). **Note** that the only difference is to set `model_path = '<your-model-mount-path>'` instead of `model_path = '<your-model-path>'`. ##### Perform inference with Yi base model The steps are similar to [pip - Perform inference with Yi base model](#perform-inference-with-yi-base-model). **Note** that the only difference is to set `--model <your-model-mount-path>'` instead of `model <your-model-path>`.`](URL and <a href=) ### Quick start - conda-lock You can use `[[* Step 0: Prerequisites * Step 1: Download URL * Step 2: Download Yi model * Step 3: Perform inference #### Step 0: Prerequisites * This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip. * Make sure 'git-lfs' is installed on your machine. #### Step 1: Download 'URL' To clone the 'URL' repository, run the following command. #### Step 2: Download Yi model 2.1 To clone XeIaso/yi-chat-6B-GGUF with just pointers, run the following command. 2.2 To download a quantized Yi model (yi-chat-6b.Q2\_K.gguf), run the following command. #### Step 3: Perform inference To perform inference with the Yi model, you can use one of the following methods. * Method 1: Perform inference in terminal * Method 2: Perform inference in web ##### Method 1: Perform inference in terminal To compile 'URL' using 4 threads and then conduct inference, navigate to the 'URL' directory, and run the following command. > > ##### Tips > > > * Replace '/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2\_K.gguf' with the actual path of your model. > * By default, the model operates in completion mode. > * For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run './main -h' to check detailed descriptions and usage. > > > Now you have successfully asked a question to the Yi model and got an answer! ##### Method 2: Perform inference in web 1. To initialize a lightweight and swift chatbot, run the following command. Then you can get an output like this: 2. To access the chatbot interface, open your web browser and enter 'http://0.0.0.0:8080' into the address bar. !Yi model chatbot interface - URL 3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps" into the prompt window, and you will receive a corresponding answer. !Ask a question to Yi model - URL](URL for installing these dependencies. <br> To install the dependencies, follow these steps: <ol> <li> <p>Install micromamba by following the instructions available <a href="URL</p> </li> <li> <p>Execute <code>micromamba install -y -n yi -f URL</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.</p> </li> </ol> </details> <h3>Quick start - URL</h3> <details> <summary> Run Yi-chat-6B-2bits locally with URL: a step-by-step guide. ⬇️</summary> <br>This tutorial guides you through every step of running a quantized model (<a href=)](URL to generate fully reproducible lock files for conda environments. ⬇️</summary> <br> You can refer to <a href=)` [ [Back to top ⬆️](#top) ] ### Web demo You can build a web UI demo for Yi chat models (note that Yi base models are not supported in this senario). Step 1: Prepare your environment. Step 2: Download the Yi model. Step 3. To start a web service locally, run the following command. You can access the web UI by entering the address provided in the console into your browser. !Quick start - web demo [ [Back to top ⬆️](#top) ] ### Fine-tuning Once finished, you can compare the finetuned model and the base model with the following command: For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ ### Finetune code for Yi 6B and 34B #### Preparation ##### From Image By default, we use a small dataset from BAAI/COIG to finetune the base model. You can also prepare your customized dataset in the following 'jsonl' format: And then mount them in the container to replace the default ones: ##### From Local Server Make sure you have conda. If not, use Then, create a conda env: #### Hardware Setup For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA\_VISIBLE\_DEVICES to limit the number of GPUs (as shown in scripts/run\_sft\_Yi\_34b.sh). A typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA\_VISIBLE\_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB. #### Quick Start Download a LLM-base model to MODEL\_PATH (6B and 34B). A typical folder of models is like: Download a dataset from huggingface to local storage DATA\_PATH, e.g. Dahoas/rm-static. 'finetune/yi\_example\_dataset' has example datasets, which are modified from BAAI/COIG 'cd' into the scripts folder, copy and paste the script, and run. For example: For the Yi-6B base model, setting training\_debug\_steps=20 and num\_train\_epochs=4 can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient. #### Evaluation Then you'll see the answer from both the base model and the finetuned model. [ [Back to top ⬆️](#top) ] ### Quantization #### GPT-Q Once finished, you can then evaluate the resulting model as follows: For details, see the explanations below. ⬇️ #### GPT-Q quantization GPT-Q is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model. Yi models can be GPT-Q quantized without a lot of efforts. We provide a step-by-step tutorial below. To run GPT-Q, we will use AutoGPTQ and exllama. And the huggingface transformers has integrated optimum and auto-gptq to perform GPTQ quantization on language models. ##### Do Quantization The 'quant\_autogptq.py' script is provided for you to perform GPT-Q quantization: ##### Run Quantized Model You can run a quantized model using the 'eval\_quantized\_model.py': #### AWQ Once finished, you can then evaluate the resulting model as follows: For details, see the explanations below. ⬇️ #### AWQ quantization AWQ is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs. Yi models can be AWQ quantized without a lot of efforts. We provide a step-by-step tutorial below. To run AWQ, we will use AutoAWQ. ##### Do Quantization The 'quant\_autoawq.py' script is provided for you to perform AWQ quantization: ##### Run Quantized Model You can run a quantized model using the 'eval\_quantized\_model.py': [ [Back to top ⬆️](#top) ] ### Deployment If you want to deploy Yi models, make sure you meet the software and hardware requirements. #### Software requirements Before using Yi quantized models, make sure you've installed the correct software listed below. #### Hardware requirements Before deploying Yi in your environment, make sure your hardware meets the following requirements. ##### Chat models Below are detailed minimum VRAM requirements under different batch use cases. ##### Base models [ [Back to top ⬆️](#top) ] ### Learning hub If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️ Welcome to the Yi learning hub! Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! #### Tutorials ##### English tutorials ##### Chinese tutorials Why Yi? ======= * Ecosystem + Upstream + Downstream - Serving - Quantization - Fine-tuning - API * Benchmarks + Chat model performance + Base model performance - Yi-34B and Yi-34B-200K - Yi-9B Ecosystem --------- Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity. * Upstream * Downstream + Serving + Quantization + Fine-tuning + API ### Upstream The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model. You can directly use 'LlamaForCausalLM' and 'LlamaTokenizer' to load the model. For more information, see Use the chat model. [ [Back to top ⬆️](#top) ] ### Downstream > > Tip > > > * Feel free to create a PR and share the fantastic work you've built using the Yi series models. > * To help others quickly understand your work, it is recommended to use the format of ': + '. > > > #### Serving If you want to get up with Yi in a few minutes, you can use the following services built upon Yi. * Yi-34B-Chat: you can chat with Yi using one of the following platforms: + Yi-34B-Chat | Hugging Face + Yi-34B-Chat | Yi Platform: Note that currently it's available through a whitelist. Welcome to apply (fill out a form in English or Chinese) and experience it firsthand! * Yi-6B-Chat (Replicate): you can use this model with more options by setting additional parameters and calling APIs. * ScaleLLM: you can use this service to run Yi models locally with added flexibility and customization. #### Quantization If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage. * TheBloke/Yi-34B-GPTQ * TheBloke/Yi-34B-GGUF * TheBloke/Yi-34B-AWQ #### Fine-tuning If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below. * TheBloke Models: this site hosts numerous fine-tuned models derived from various LLMs including Yi. This is not an exhaustive list for Yi, but to name a few sorted on downloads: + TheBloke/dolphin-2\_2-yi-34b-AWQ + TheBloke/Yi-34B-Chat-AWQ + TheBloke/Yi-34B-Chat-GPTQ * SUSTech/SUS-Chat-34B: this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the Open LLM Leaderboard. * OrionStarAI/OrionStar-Yi-34B-Chat-Llama: this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the OpenCompass LLM Leaderboard. * NousResearch/Nous-Capybara-34B: this model is trained with 200K context length and 3 epochs on the Capybara dataset. #### API * amazing-openai-api: this tool converts Yi model APIs into the OpenAI API format out of the box. * LlamaEdge: this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust. [ [Back to top ⬆️](#top) ] Tech report ----------- For detailed capabilities of the Yi series model, see Yi: Open Foundation Models by 01.AI. Benchmarks ---------- * Chat model performance * Base model performance ### Chat model performance Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more. !Chat model performance Evaluation methods and challenges. ⬇️ * Evaluation methods: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. * Zero-shot vs. few-shot: in chat models, the zero-shot approach is more commonly employed. * Evaluation strategy: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. * Challenges faced: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results. **\***: C-Eval results are evaluated on the validation datasets ### Base model performance #### Yi-34B and Yi-34B-200K The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more. !Base model performance Evaluation methods. ⬇️ * Disparity in results: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass. * Investigation findings: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences. * Uniform benchmarking process: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content. * Efforts to retrieve unreported scores: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. * Extensive model evaluation: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. * Special configurations: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". * Falcon-180B caveat: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated. #### Yi-9B Yi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension. !Yi-9B benchmark - details * In terms of overall ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B. !Yi-9B benchmark - overall * In terms of coding ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B. !Yi-9B benchmark - code * In terms of math ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B. !Yi-9B benchmark - math * In terms of common sense and reasoning ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B. !Yi-9B benchmark - text [ [Back to top ⬆️](#top) ] Who can use Yi? =============== Everyone! * The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the Yi Series Models Community License Agreement 2.1 * For free commercial use, you only need to complete this form to get a Yi Model Commercial License. [ [Back to top ⬆️](#top) ] Misc. ===== ### Acknowledgments A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation. ![yi contributors](URL [ [Back to top ⬆️](#top) ] ### Disclaimer We use data compliance checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct, and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. [ [Back to top ⬆️](#top) ] ### License The source code in this repo is licensed under the Apache 2.0 license. The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the Yi Series Models Community License Agreement 2.1. For free commercial use, you only need to send an email to get official commercial permission. [ [Back to top ⬆️](#top) ]
[ "### Building the Next Generation of Open-Source and Bilingual LLMs\n\n\n\n\n[Hugging Face](URL target=) • [ModelScope](URL target=) • ️ [WiseModel](URL target=)\n\n\n\n\n ‍ Ask questions or discuss ideas on [GitHub](01-ai/Yi · Discussions) \n\n\n\n\n Join us on [Discord](URL target=) or [WeChat](有官方的微信群嘛 · Issue #43 · 01-ai/Yi) \n\n\n\n\n Check out [Grow at [Yi Learning Hub](#learning-hub)](URL Yi Tech Report </a>\n</p> \n<p align=)\n\n\n\n\n---\n\n\n\n Table of Contents\n* What is Yi?\n\t+ Introduction\n\t+ Models\n\t\t- Chat models\n\t\t- Base models\n\t\t- Model info\n\t+ News\n* How to use Yi?\n\t+ Quick start\n\t\t- Choose your path\n\t\t- pip\n\t\t- docker\n\t\t- URL\n\t\t- conda-lock\n\t\t- Web demo\n\t+ Fine-tuning\n\t+ Quantization\n\t+ Deployment\n\t+ Learning hub\n* Why Yi?\n\t+ Ecosystem\n\t\t- Upstream\n\t\t- Downstream\n\t\t\t* Serving\n\t\t\t* Quantization\n\t\t\t* Fine-tuning\n\t\t\t* API\n\t+ Benchmarks\n\t\t- Base model performance\n\t\t- Chat model performance\n\t+ Tech report\n\t\t- Citation\n* Who can use Yi?\n* Misc.\n\t+ Acknowledgements\n\t+ Disclaimer\n\t+ License\n\n\n\n\n\n---\n\n\nWhat is Yi?\n===========\n\n\nIntroduction\n------------\n\n\n* The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.\n* Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,\n* Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).\n* Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).\n* (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.\n\n\n If you're interested in Yi's adoption of Llama architecture and license usage policy, see Yi's relation with Llama. ⬇️ \n\n\n> \n> TL;DR\n> \n> \n> The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.\n> \n> \n> \n\n+ Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.\n+ Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.\n+ Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.\n+ However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.\n\n\n\t- As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure.\n\t- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the Alpaca Leaderboard in Dec 2023.\n\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nNews\n----\n\n\n\n **2024-03-16**: The `Yi-9B-200K` is open-sourced and available to the public.\n\n\n **2024-03-08**: [**2024-03-06**: The `Yi-9B` is open-sourced and available to the public.\n \n\n`Yi-9B` stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.\n\n\n **2024-01-23**: The Yi-VL models, `[`[Chat models](URL (based on data available up to January 2024).</li>\n</details>\n<details>\n<summary> <b>2023-11-23</b>: <a href=) are open-sourced and available to the public.`](URL and <code><a href=)`\n \nThis release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.\n* 'Yi-34B-Chat'\n* 'Yi-34B-Chat-4bits'\n* 'Yi-34B-Chat-8bits'\n* 'Yi-6B-Chat'\n* 'Yi-6B-Chat-4bits'\n* 'Yi-6B-Chat-8bits'\n\n\nYou can try some of them interactively at:\n\n\n* Hugging Face\n* Replicate\n\n\n\n\n **2023-11-23**: The Yi Series Models Community License Agreement is updated to [The base models,](URL\n</details>\n<details> \n<summary> <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>\n<br>Application form:\n<ul>\n<li>English</li>\n<li>Chinese</li>\n</ul>\n</details>\n<details>\n<summary> <b>2023-11-05</b>: <a href=) `Yi-6B-200K` and `Yi-34B-200K`, are open-sourced and available to the public.\n \nThis release contains two base models with the same parameter sizes as the previous\nrelease, except that the context window is extended to 200K.\n\n\n **2023-11-02**: [The base models,](#base-models) `Yi-6B` and `Yi-34B`, are open-sourced and available to the public.\n \nThe first public release contains two bilingual (English/Chinese) base models\nwith the parameter sizes of 6B and 34B. Both of them are trained with 4K\nsequence length and can be extended to 32K during inference time.\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nModels\n------\n\n\nYi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.\n\n\nIf you want to deploy Yi models, make sure you meet the software and hardware requirements.", "### Chat models\n\n\n\n - 4-bit series models are quantized by AWQ. \n - 8-bit series models are quantized by GPTQ \n - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090).", "### Base models\n\n\n\n - 200k is roughly equivalent to 400,000 Chinese characters. \n - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run 'git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf' to download the weight.", "### Model info\n\n\n* For chat and base models\n\n\nModel: 9B series models, Intro: It is the best at coding and math in the Yi series models., Default context window: Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.\nModel: 34B series models, Intro: They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability., Default context window: 3T\n\n\n* For chat models\n\n\nFor chat model limitations, see the explanations below. ⬇️\n\n\t \n\tThe released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.\n\t \n\tHowever, this higher diversity might amplify certain existing issues, including:\n\t+ Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.\n\t\n\t+ Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.\n\t\n\t+ Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.\n\t\n\t+ To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top\\_p, or top\\_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.](URL Tech Report</a> is published! </summary>\n</details>\n<details open>\n <summary> <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>\n <br>\nIn the )\n [\n [Back to top ⬆️](#top) ] \n\n\n\nHow to use Yi?\n==============\n\n\n* Quick start\n\t+ Choose your path\n\t+ pip\n\t+ docker\n\t+ conda-lock\n\t+ URL\n\t+ Web demo\n* Fine-tuning\n* Quantization\n* Deployment\n* Learning hub\n\n\nQuick start\n-----------\n\n\nGetting up and running with Yi models is simple with multiple choices available.", "### Choose your path\n\n\nSelect one of the following paths to begin your journey with Yi!\n\n\n!Quick start - Choose your path", "#### Deploy Yi locally\n\n\nIf you prefer to deploy Yi models locally,\n\n\n* ‍️ and you have sufficient resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:\n\n\n\t+ pip\n\t+ Docker\n\t+ conda-lock\n* ‍️ and you have limited resources (for example, a MacBook Pro), you can use URL.", "#### Not to deploy Yi locally\n\n\nIf you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.", "##### ‍️ Run Yi with APIs\n\n\nIf you want to explore more features of Yi, you can adopt one of these methods:\n\n\n* Yi APIs (Yi official)\n\n\n\t+ Early access has been granted to some applicants. Stay tuned for the next round of access!\n* Yi APIs (Replicate)", "##### ‍️ Run Yi in playground\n\n\nIf you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:\n\n\n* Yi-34B-Chat-Playground (Yi official)\n\n\n\t+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).\n* Yi-34B-Chat-Playground (Replicate)", "##### ‍️ Chat with Yi\n\n\nIf you want to chat with Yi, you can use one of these online services, which offer a similar user experience:\n\n\n* Yi-34B-Chat (Yi official on Hugging Face)\n\n\n\t+ No registration is required.\n* Yi-34B-Chat (Yi official beta)\n\n\n\t+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).\n\n\n [\n [Back to top ⬆️](#top) ]", "### Quick start - pip\n\n\nThis tutorial guides you through every step of running Yi-34B-Chat locally on an A800 (80G) and then performing inference.", "#### Step 0: Prerequisites\n\n\n* Make sure Python 3.10 or a later version is installed.\n* If you want to run other Yi models, see software and hardware requirements.", "#### Step 1: Prepare your environment\n\n\nTo set up the environment and install the required packages, execute the following command.", "#### Step 2: Download the Yi model\n\n\nYou can download the weights and tokenizer of Yi models from the following sources:\n\n\n* Hugging Face\n* ModelScope\n* WiseModel", "#### Step 3: Perform inference\n\n\nYou can perform inference with Yi chat or base models as below.", "##### Perform inference with Yi chat model\n\n\n1. Create a file named 'quick\\_start.py' and copy the following content to it.\n2. Run 'quick\\_start.py'.\n\n\nThen you can see an output similar to the one below.", "##### Perform inference with Yi base model\n\n\n* Yi-34B\n\n\nThe steps are similar to pip - Perform inference with Yi chat model.\n\n\nYou can use the existing file 'text\\_generation.py'.\n\n\nThen you can see an output similar to the one below.\n\n\n\nOutput. ⬇️ \n \n\nPrompt: Let me tell you an interesting story about cat Tom and mouse Jerry,\n\n\nGeneration: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...\n* Yi-9B\n\n\nInput\n\n\nOutput\n\n\n [\n [Back to top ⬆️](#top) ]", "### Quick start - Docker\n\n\n\n Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️\n \nThis tutorial guides you through every step of running **Yi-34B-Chat on an A800 GPU** or **4\\*4090** locally and then performing inference.\n #### Step 0: Prerequisites\n\n\nMake sure you've installed [Step 1: Start Docker \n\n```\ndocker run -it --gpus all \\\n-v <your-model-path>: /models\nURL\n\n```\n\nAlternatively, you can pull the Yi Docker image from `URL", "#### Step 2: Perform inference\n\n\nYou can perform inference with Yi chat or base models as below.", "##### Perform inference with Yi chat model\n\n\nThe steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).\n\n\n**Note** that the only difference is to set `model_path = '<your-model-mount-path>'` instead of `model_path = '<your-model-path>'`.", "##### Perform inference with Yi base model\n\n\nThe steps are similar to [pip - Perform inference with Yi base model](#perform-inference-with-yi-base-model).\n\n\n**Note** that the only difference is to set `--model <your-model-mount-path>'` instead of `model <your-model-path>`.`](URL and <a href=)", "### Quick start - conda-lock\n\n\n\nYou can use `[[* Step 0: Prerequisites\n* Step 1: Download URL\n* Step 2: Download Yi model\n* Step 3: Perform inference", "#### Step 0: Prerequisites\n\n\n* This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.\n* Make sure 'git-lfs' is installed on your machine.", "#### Step 1: Download 'URL'\n\n\nTo clone the 'URL' repository, run the following command.", "#### Step 2: Download Yi model\n\n\n2.1 To clone XeIaso/yi-chat-6B-GGUF with just pointers, run the following command.\n\n\n2.2 To download a quantized Yi model (yi-chat-6b.Q2\\_K.gguf), run the following command.", "#### Step 3: Perform inference\n\n\nTo perform inference with the Yi model, you can use one of the following methods.\n\n\n* Method 1: Perform inference in terminal\n* Method 2: Perform inference in web", "##### Method 1: Perform inference in terminal\n\n\nTo compile 'URL' using 4 threads and then conduct inference, navigate to the 'URL' directory, and run the following command.\n\n\n\n> \n> ##### Tips\n> \n> \n> * Replace '/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2\\_K.gguf' with the actual path of your model.\n> * By default, the model operates in completion mode.\n> * For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run './main -h' to check detailed descriptions and usage.\n> \n> \n> \n\n\nNow you have successfully asked a question to the Yi model and got an answer!", "##### Method 2: Perform inference in web\n\n\n1. To initialize a lightweight and swift chatbot, run the following command.\n\n\nThen you can get an output like this:\n2. To access the chatbot interface, open your web browser and enter 'http://0.0.0.0:8080' into the address bar.\n\n\n!Yi model chatbot interface - URL\n3. Enter a question, such as \"How do you feed your pet fox? Please answer this question in 6 simple steps\" into the prompt window, and you will receive a corresponding answer.\n\n\n!Ask a question to Yi model - URL](URL for installing these dependencies.\n<br>\nTo install the dependencies, follow these steps:\n<ol>\n<li>\n<p>Install micromamba by following the instructions available <a href=\"URL</p>\n</li>\n<li>\n<p>Execute <code>micromamba install -y -n yi -f URL</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.</p>\n</li>\n</ol>\n</details>\n<h3>Quick start - URL</h3>\n<details>\n<summary> Run Yi-chat-6B-2bits locally with URL: a step-by-step guide. ⬇️</summary> \n<br>This tutorial guides you through every step of running a quantized model (<a href=)](URL to generate fully reproducible lock files for conda environments. ⬇️</summary>\n<br>\nYou can refer to <a href=)`\n [\n [Back to top ⬆️](#top) ]", "### Web demo\n\n\nYou can build a web UI demo for Yi chat models (note that Yi base models are not supported in this senario).\n\n\nStep 1: Prepare your environment.\n\n\nStep 2: Download the Yi model.\n\n\nStep 3. To start a web service locally, run the following command.\n\n\nYou can access the web UI by entering the address provided in the console into your browser.\n\n\n!Quick start - web demo\n\n\n [\n [Back to top ⬆️](#top) ]", "### Fine-tuning\n\n\nOnce finished, you can compare the finetuned model and the base model with the following command:\n\n\nFor advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ ### Finetune code for Yi 6B and 34B", "#### Preparation", "##### From Image\n\n\nBy default, we use a small dataset from BAAI/COIG to finetune the base model.\nYou can also prepare your customized dataset in the following 'jsonl' format:\n\n\nAnd then mount them in the container to replace the default ones:", "##### From Local Server\n\n\nMake sure you have conda. If not, use\n\n\nThen, create a conda env:", "#### Hardware Setup\n\n\nFor the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.\n\n\nFor the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA\\_VISIBLE\\_DEVICES to limit the number of GPUs (as shown in scripts/run\\_sft\\_Yi\\_34b.sh).\n\n\nA typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA\\_VISIBLE\\_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.", "#### Quick Start\n\n\nDownload a LLM-base model to MODEL\\_PATH (6B and 34B). A typical folder of models is like:\n\n\nDownload a dataset from huggingface to local storage DATA\\_PATH, e.g. Dahoas/rm-static.\n\n\n'finetune/yi\\_example\\_dataset' has example datasets, which are modified from BAAI/COIG\n\n\n'cd' into the scripts folder, copy and paste the script, and run. For example:\n\n\nFor the Yi-6B base model, setting training\\_debug\\_steps=20 and num\\_train\\_epochs=4 can output a chat model, which takes about 20 minutes.\n\n\nFor the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.", "#### Evaluation\n\n\nThen you'll see the answer from both the base model and the finetuned model.\n\n\n\n\n [\n [Back to top ⬆️](#top) ]", "### Quantization", "#### GPT-Q\n\n\nOnce finished, you can then evaluate the resulting model as follows:\n\n\nFor details, see the explanations below. ⬇️ #### GPT-Q quantization\n\n\nGPT-Q is a PTQ (Post-Training Quantization)\nmethod. It saves memory and provides potential speedups while retaining the accuracy\nof the model.\n\n\nYi models can be GPT-Q quantized without a lot of efforts.\nWe provide a step-by-step tutorial below.\n\n\nTo run GPT-Q, we will use AutoGPTQ and\nexllama.\nAnd the huggingface transformers has integrated optimum and auto-gptq to perform\nGPTQ quantization on language models.", "##### Do Quantization\n\n\nThe 'quant\\_autogptq.py' script is provided for you to perform GPT-Q quantization:", "##### Run Quantized Model\n\n\nYou can run a quantized model using the 'eval\\_quantized\\_model.py':", "#### AWQ\n\n\nOnce finished, you can then evaluate the resulting model as follows:\n\n\nFor details, see the explanations below. ⬇️ #### AWQ quantization\n\n\nAWQ is a PTQ (Post-Training Quantization)\nmethod. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.\n\n\nYi models can be AWQ quantized without a lot of efforts.\nWe provide a step-by-step tutorial below.\n\n\nTo run AWQ, we will use AutoAWQ.", "##### Do Quantization\n\n\nThe 'quant\\_autoawq.py' script is provided for you to perform AWQ quantization:", "##### Run Quantized Model\n\n\nYou can run a quantized model using the 'eval\\_quantized\\_model.py':\n\n\n\n\n [\n [Back to top ⬆️](#top) ]", "### Deployment\n\n\nIf you want to deploy Yi models, make sure you meet the software and hardware requirements.", "#### Software requirements\n\n\nBefore using Yi quantized models, make sure you've installed the correct software listed below.", "#### Hardware requirements\n\n\nBefore deploying Yi in your environment, make sure your hardware meets the following requirements.", "##### Chat models\n\n\n\nBelow are detailed minimum VRAM requirements under different batch use cases.", "##### Base models\n\n\n\n [\n [Back to top ⬆️](#top) ]", "### Learning hub\n\n\n\n If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️\n \n\nWelcome to the Yi learning hub!\n\n\nWhether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.\n\n\nThe content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!\n\n\nAt the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.\n\n\nWith all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning!", "#### Tutorials", "##### English tutorials", "##### Chinese tutorials\n\n\n\n\nWhy Yi?\n=======\n\n\n* Ecosystem\n\t+ Upstream\n\t+ Downstream\n\t\t- Serving\n\t\t- Quantization\n\t\t- Fine-tuning\n\t\t- API\n* Benchmarks\n\t+ Chat model performance\n\t+ Base model performance\n\t\t- Yi-34B and Yi-34B-200K\n\t\t- Yi-9B\n\n\nEcosystem\n---------\n\n\nYi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.\n\n\n* Upstream\n* Downstream\n\t+ Serving\n\t+ Quantization\n\t+ Fine-tuning\n\t+ API", "### Upstream\n\n\nThe Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.\n\n\nFor example, the Yi series models are saved in the format of the Llama model. You can directly use 'LlamaForCausalLM' and 'LlamaTokenizer' to load the model. For more information, see Use the chat model.\n\n\n [\n [Back to top ⬆️](#top) ]", "### Downstream\n\n\n\n> \n> Tip\n> \n> \n> * Feel free to create a PR and share the fantastic work you've built using the Yi series models.\n> * To help others quickly understand your work, it is recommended to use the format of ': + '.\n> \n> \n>", "#### Serving\n\n\nIf you want to get up with Yi in a few minutes, you can use the following services built upon Yi.\n\n\n* Yi-34B-Chat: you can chat with Yi using one of the following platforms:\n\n\n\t+ Yi-34B-Chat | Hugging Face\n\t+ Yi-34B-Chat | Yi Platform: Note that currently it's available through a whitelist. Welcome to apply (fill out a form in English or Chinese) and experience it firsthand!\n* Yi-6B-Chat (Replicate): you can use this model with more options by setting additional parameters and calling APIs.\n* ScaleLLM: you can use this service to run Yi models locally with added flexibility and customization.", "#### Quantization\n\n\nIf you have limited computational capabilities, you can use Yi's quantized models as follows.\n\n\nThese quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.\n\n\n* TheBloke/Yi-34B-GPTQ\n* TheBloke/Yi-34B-GGUF\n* TheBloke/Yi-34B-AWQ", "#### Fine-tuning\n\n\nIf you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.\n\n\n* TheBloke Models: this site hosts numerous fine-tuned models derived from various LLMs including Yi.\n\n\nThis is not an exhaustive list for Yi, but to name a few sorted on downloads:\n\n\n\t+ TheBloke/dolphin-2\\_2-yi-34b-AWQ\n\t+ TheBloke/Yi-34B-Chat-AWQ\n\t+ TheBloke/Yi-34B-Chat-GPTQ\n* SUSTech/SUS-Chat-34B: this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the Open LLM Leaderboard.\n* OrionStarAI/OrionStar-Yi-34B-Chat-Llama: this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the OpenCompass LLM Leaderboard.\n* NousResearch/Nous-Capybara-34B: this model is trained with 200K context length and 3 epochs on the Capybara dataset.", "#### API\n\n\n* amazing-openai-api: this tool converts Yi model APIs into the OpenAI API format out of the box.\n* LlamaEdge: this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nTech report\n-----------\n\n\nFor detailed capabilities of the Yi series model, see Yi: Open Foundation Models by 01.AI.\n\n\nBenchmarks\n----------\n\n\n* Chat model performance\n* Base model performance", "### Chat model performance\n\n\nYi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.\n\n\n!Chat model performance\n\n\n\n Evaluation methods and challenges. ⬇️ \n* Evaluation methods: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.\n* Zero-shot vs. few-shot: in chat models, the zero-shot approach is more commonly employed.\n* Evaluation strategy: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.\n* Challenges faced: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results.\n\n\n**\\***: C-Eval results are evaluated on the validation datasets", "### Base model performance", "#### Yi-34B and Yi-34B-200K\n\n\nThe Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.\n\n\n!Base model performance\n\n\n\n Evaluation methods. ⬇️\n* Disparity in results: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.\n* Investigation findings: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.\n* Uniform benchmarking process: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.\n* Efforts to retrieve unreported scores: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.\n* Extensive model evaluation: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.\n* Special configurations: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category \"Math & Code\".\n* Falcon-180B caveat: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.", "#### Yi-9B\n\n\nYi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.\n\n\n!Yi-9B benchmark - details\n\n\n* In terms of overall ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - overall\n* In terms of coding ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - code\n* In terms of math ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - math\n* In terms of common sense and reasoning ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - text\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nWho can use Yi?\n===============\n\n\nEveryone!\n\n\n* The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the Yi Series Models Community License Agreement 2.1\n* For free commercial use, you only need to complete this form to get a Yi Model Commercial License.\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nMisc.\n=====", "### Acknowledgments\n\n\nA heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation.\n\n\n![yi contributors](URL\n\n\n [\n [Back to top ⬆️](#top) ]", "### Disclaimer\n\n\nWe use data compliance checking algorithms during the training process, to\nensure the compliance of the trained model to the best of our ability. Due to\ncomplex data and the diversity of language model usage scenarios, we cannot\nguarantee that the model will generate correct, and reasonable output in all\nscenarios. Please be aware that there is still a risk of the model producing\nproblematic outputs. We will not be responsible for any risks and issues\nresulting from misuse, misguidance, illegal usage, and related misinformation,\nas well as any associated data security concerns.\n\n\n [\n [Back to top ⬆️](#top) ]", "### License\n\n\nThe source code in this repo is licensed under the Apache 2.0\nlicense. The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the Yi Series Models Community License Agreement 2.1.\nFor free commercial use, you only need to send an email to get official commercial permission.\n\n\n [\n [Back to top ⬆️](#top) ]" ]
[ "TAGS\n#gguf #arxiv-2403.04652 #arxiv-2311.16502 #arxiv-2401.11944 #region-us \n", "### Building the Next Generation of Open-Source and Bilingual LLMs\n\n\n\n\n[Hugging Face](URL target=) • [ModelScope](URL target=) • ️ [WiseModel](URL target=)\n\n\n\n\n ‍ Ask questions or discuss ideas on [GitHub](01-ai/Yi · Discussions) \n\n\n\n\n Join us on [Discord](URL target=) or [WeChat](有官方的微信群嘛 · Issue #43 · 01-ai/Yi) \n\n\n\n\n Check out [Grow at [Yi Learning Hub](#learning-hub)](URL Yi Tech Report </a>\n</p> \n<p align=)\n\n\n\n\n---\n\n\n\n Table of Contents\n* What is Yi?\n\t+ Introduction\n\t+ Models\n\t\t- Chat models\n\t\t- Base models\n\t\t- Model info\n\t+ News\n* How to use Yi?\n\t+ Quick start\n\t\t- Choose your path\n\t\t- pip\n\t\t- docker\n\t\t- URL\n\t\t- conda-lock\n\t\t- Web demo\n\t+ Fine-tuning\n\t+ Quantization\n\t+ Deployment\n\t+ Learning hub\n* Why Yi?\n\t+ Ecosystem\n\t\t- Upstream\n\t\t- Downstream\n\t\t\t* Serving\n\t\t\t* Quantization\n\t\t\t* Fine-tuning\n\t\t\t* API\n\t+ Benchmarks\n\t\t- Base model performance\n\t\t- Chat model performance\n\t+ Tech report\n\t\t- Citation\n* Who can use Yi?\n* Misc.\n\t+ Acknowledgements\n\t+ Disclaimer\n\t+ License\n\n\n\n\n\n---\n\n\nWhat is Yi?\n===========\n\n\nIntroduction\n------------\n\n\n* The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.\n* Targeted as a bilingual language model and trained on 3T multilingual corpus, the Yi series models become one of the strongest LLM worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,\n* Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).\n* Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).\n* (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the efforts required to build from scratch and enable the utilization of the same tools within the AI ecosystem.\n\n\n If you're interested in Yi's adoption of Llama architecture and license usage policy, see Yi's relation with Llama. ⬇️ \n\n\n> \n> TL;DR\n> \n> \n> The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.\n> \n> \n> \n\n+ Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.\n+ Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.\n+ Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.\n+ However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.\n\n\n\t- As Llama's structure is employed by the majority of open-source models, the key factors of determining model performance are training datasets, training pipelines, and training infrastructure.\n\t- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT4 and surpassing Llama on the Alpaca Leaderboard in Dec 2023.\n\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nNews\n----\n\n\n\n **2024-03-16**: The `Yi-9B-200K` is open-sourced and available to the public.\n\n\n **2024-03-08**: [**2024-03-06**: The `Yi-9B` is open-sourced and available to the public.\n \n\n`Yi-9B` stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.\n\n\n **2024-01-23**: The Yi-VL models, `[`[Chat models](URL (based on data available up to January 2024).</li>\n</details>\n<details>\n<summary> <b>2023-11-23</b>: <a href=) are open-sourced and available to the public.`](URL and <code><a href=)`\n \nThis release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.\n* 'Yi-34B-Chat'\n* 'Yi-34B-Chat-4bits'\n* 'Yi-34B-Chat-8bits'\n* 'Yi-6B-Chat'\n* 'Yi-6B-Chat-4bits'\n* 'Yi-6B-Chat-8bits'\n\n\nYou can try some of them interactively at:\n\n\n* Hugging Face\n* Replicate\n\n\n\n\n **2023-11-23**: The Yi Series Models Community License Agreement is updated to [The base models,](URL\n</details>\n<details> \n<summary> <b>2023-11-08</b>: Invited test of Yi-34B chat model.</summary>\n<br>Application form:\n<ul>\n<li>English</li>\n<li>Chinese</li>\n</ul>\n</details>\n<details>\n<summary> <b>2023-11-05</b>: <a href=) `Yi-6B-200K` and `Yi-34B-200K`, are open-sourced and available to the public.\n \nThis release contains two base models with the same parameter sizes as the previous\nrelease, except that the context window is extended to 200K.\n\n\n **2023-11-02**: [The base models,](#base-models) `Yi-6B` and `Yi-34B`, are open-sourced and available to the public.\n \nThe first public release contains two bilingual (English/Chinese) base models\nwith the parameter sizes of 6B and 34B. Both of them are trained with 4K\nsequence length and can be extended to 32K during inference time.\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nModels\n------\n\n\nYi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.\n\n\nIf you want to deploy Yi models, make sure you meet the software and hardware requirements.", "### Chat models\n\n\n\n - 4-bit series models are quantized by AWQ. \n - 8-bit series models are quantized by GPTQ \n - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., 3090, 4090).", "### Base models\n\n\n\n - 200k is roughly equivalent to 400,000 Chinese characters. \n - If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run 'git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf' to download the weight.", "### Model info\n\n\n* For chat and base models\n\n\nModel: 9B series models, Intro: It is the best at coding and math in the Yi series models., Default context window: Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens.\nModel: 34B series models, Intro: They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability., Default context window: 3T\n\n\n* For chat models\n\n\nFor chat model limitations, see the explanations below. ⬇️\n\n\t \n\tThe released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.\n\t \n\tHowever, this higher diversity might amplify certain existing issues, including:\n\t+ Hallucination: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there's a higher chance of hallucination that are not based on accurate data or logical reasoning.\n\t\n\t+ Non-determinism in re-generation: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.\n\t\n\t+ Cumulative Error: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.\n\t\n\t+ To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as temperature, top\\_p, or top\\_k. These adjustments can help in the balance between creativity and coherence in the model's outputs.](URL Tech Report</a> is published! </summary>\n</details>\n<details open>\n <summary> <b>2024-03-07</b>: The long text capability of the Yi-34B-200K has been enhanced. </summary>\n <br>\nIn the )\n [\n [Back to top ⬆️](#top) ] \n\n\n\nHow to use Yi?\n==============\n\n\n* Quick start\n\t+ Choose your path\n\t+ pip\n\t+ docker\n\t+ conda-lock\n\t+ URL\n\t+ Web demo\n* Fine-tuning\n* Quantization\n* Deployment\n* Learning hub\n\n\nQuick start\n-----------\n\n\nGetting up and running with Yi models is simple with multiple choices available.", "### Choose your path\n\n\nSelect one of the following paths to begin your journey with Yi!\n\n\n!Quick start - Choose your path", "#### Deploy Yi locally\n\n\nIf you prefer to deploy Yi models locally,\n\n\n* ‍️ and you have sufficient resources (for example, NVIDIA A800 80GB), you can choose one of the following methods:\n\n\n\t+ pip\n\t+ Docker\n\t+ conda-lock\n* ‍️ and you have limited resources (for example, a MacBook Pro), you can use URL.", "#### Not to deploy Yi locally\n\n\nIf you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.", "##### ‍️ Run Yi with APIs\n\n\nIf you want to explore more features of Yi, you can adopt one of these methods:\n\n\n* Yi APIs (Yi official)\n\n\n\t+ Early access has been granted to some applicants. Stay tuned for the next round of access!\n* Yi APIs (Replicate)", "##### ‍️ Run Yi in playground\n\n\nIf you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:\n\n\n* Yi-34B-Chat-Playground (Yi official)\n\n\n\t+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).\n* Yi-34B-Chat-Playground (Replicate)", "##### ‍️ Chat with Yi\n\n\nIf you want to chat with Yi, you can use one of these online services, which offer a similar user experience:\n\n\n* Yi-34B-Chat (Yi official on Hugging Face)\n\n\n\t+ No registration is required.\n* Yi-34B-Chat (Yi official beta)\n\n\n\t+ Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).\n\n\n [\n [Back to top ⬆️](#top) ]", "### Quick start - pip\n\n\nThis tutorial guides you through every step of running Yi-34B-Chat locally on an A800 (80G) and then performing inference.", "#### Step 0: Prerequisites\n\n\n* Make sure Python 3.10 or a later version is installed.\n* If you want to run other Yi models, see software and hardware requirements.", "#### Step 1: Prepare your environment\n\n\nTo set up the environment and install the required packages, execute the following command.", "#### Step 2: Download the Yi model\n\n\nYou can download the weights and tokenizer of Yi models from the following sources:\n\n\n* Hugging Face\n* ModelScope\n* WiseModel", "#### Step 3: Perform inference\n\n\nYou can perform inference with Yi chat or base models as below.", "##### Perform inference with Yi chat model\n\n\n1. Create a file named 'quick\\_start.py' and copy the following content to it.\n2. Run 'quick\\_start.py'.\n\n\nThen you can see an output similar to the one below.", "##### Perform inference with Yi base model\n\n\n* Yi-34B\n\n\nThe steps are similar to pip - Perform inference with Yi chat model.\n\n\nYou can use the existing file 'text\\_generation.py'.\n\n\nThen you can see an output similar to the one below.\n\n\n\nOutput. ⬇️ \n \n\nPrompt: Let me tell you an interesting story about cat Tom and mouse Jerry,\n\n\nGeneration: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...\n* Yi-9B\n\n\nInput\n\n\nOutput\n\n\n [\n [Back to top ⬆️](#top) ]", "### Quick start - Docker\n\n\n\n Run Yi-34B-chat locally with Docker: a step-by-step guide. ⬇️\n \nThis tutorial guides you through every step of running **Yi-34B-Chat on an A800 GPU** or **4\\*4090** locally and then performing inference.\n #### Step 0: Prerequisites\n\n\nMake sure you've installed [Step 1: Start Docker \n\n```\ndocker run -it --gpus all \\\n-v <your-model-path>: /models\nURL\n\n```\n\nAlternatively, you can pull the Yi Docker image from `URL", "#### Step 2: Perform inference\n\n\nYou can perform inference with Yi chat or base models as below.", "##### Perform inference with Yi chat model\n\n\nThe steps are similar to [pip - Perform inference with Yi chat model](#perform-inference-with-yi-chat-model).\n\n\n**Note** that the only difference is to set `model_path = '<your-model-mount-path>'` instead of `model_path = '<your-model-path>'`.", "##### Perform inference with Yi base model\n\n\nThe steps are similar to [pip - Perform inference with Yi base model](#perform-inference-with-yi-base-model).\n\n\n**Note** that the only difference is to set `--model <your-model-mount-path>'` instead of `model <your-model-path>`.`](URL and <a href=)", "### Quick start - conda-lock\n\n\n\nYou can use `[[* Step 0: Prerequisites\n* Step 1: Download URL\n* Step 2: Download Yi model\n* Step 3: Perform inference", "#### Step 0: Prerequisites\n\n\n* This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.\n* Make sure 'git-lfs' is installed on your machine.", "#### Step 1: Download 'URL'\n\n\nTo clone the 'URL' repository, run the following command.", "#### Step 2: Download Yi model\n\n\n2.1 To clone XeIaso/yi-chat-6B-GGUF with just pointers, run the following command.\n\n\n2.2 To download a quantized Yi model (yi-chat-6b.Q2\\_K.gguf), run the following command.", "#### Step 3: Perform inference\n\n\nTo perform inference with the Yi model, you can use one of the following methods.\n\n\n* Method 1: Perform inference in terminal\n* Method 2: Perform inference in web", "##### Method 1: Perform inference in terminal\n\n\nTo compile 'URL' using 4 threads and then conduct inference, navigate to the 'URL' directory, and run the following command.\n\n\n\n> \n> ##### Tips\n> \n> \n> * Replace '/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2\\_K.gguf' with the actual path of your model.\n> * By default, the model operates in completion mode.\n> * For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run './main -h' to check detailed descriptions and usage.\n> \n> \n> \n\n\nNow you have successfully asked a question to the Yi model and got an answer!", "##### Method 2: Perform inference in web\n\n\n1. To initialize a lightweight and swift chatbot, run the following command.\n\n\nThen you can get an output like this:\n2. To access the chatbot interface, open your web browser and enter 'http://0.0.0.0:8080' into the address bar.\n\n\n!Yi model chatbot interface - URL\n3. Enter a question, such as \"How do you feed your pet fox? Please answer this question in 6 simple steps\" into the prompt window, and you will receive a corresponding answer.\n\n\n!Ask a question to Yi model - URL](URL for installing these dependencies.\n<br>\nTo install the dependencies, follow these steps:\n<ol>\n<li>\n<p>Install micromamba by following the instructions available <a href=\"URL</p>\n</li>\n<li>\n<p>Execute <code>micromamba install -y -n yi -f URL</code> to create a conda environment named <code>yi</code> and install the necessary dependencies.</p>\n</li>\n</ol>\n</details>\n<h3>Quick start - URL</h3>\n<details>\n<summary> Run Yi-chat-6B-2bits locally with URL: a step-by-step guide. ⬇️</summary> \n<br>This tutorial guides you through every step of running a quantized model (<a href=)](URL to generate fully reproducible lock files for conda environments. ⬇️</summary>\n<br>\nYou can refer to <a href=)`\n [\n [Back to top ⬆️](#top) ]", "### Web demo\n\n\nYou can build a web UI demo for Yi chat models (note that Yi base models are not supported in this senario).\n\n\nStep 1: Prepare your environment.\n\n\nStep 2: Download the Yi model.\n\n\nStep 3. To start a web service locally, run the following command.\n\n\nYou can access the web UI by entering the address provided in the console into your browser.\n\n\n!Quick start - web demo\n\n\n [\n [Back to top ⬆️](#top) ]", "### Fine-tuning\n\n\nOnce finished, you can compare the finetuned model and the base model with the following command:\n\n\nFor advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️ ### Finetune code for Yi 6B and 34B", "#### Preparation", "##### From Image\n\n\nBy default, we use a small dataset from BAAI/COIG to finetune the base model.\nYou can also prepare your customized dataset in the following 'jsonl' format:\n\n\nAnd then mount them in the container to replace the default ones:", "##### From Local Server\n\n\nMake sure you have conda. If not, use\n\n\nThen, create a conda env:", "#### Hardware Setup\n\n\nFor the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended.\n\n\nFor the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B finetune training. Please use CUDA\\_VISIBLE\\_DEVICES to limit the number of GPUs (as shown in scripts/run\\_sft\\_Yi\\_34b.sh).\n\n\nA typical hardware setup for finetuning the 34B model is a node with 8 GPUs (limited to 4 in running by CUDA\\_VISIBLE\\_DEVICES=0,1,2,3), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.", "#### Quick Start\n\n\nDownload a LLM-base model to MODEL\\_PATH (6B and 34B). A typical folder of models is like:\n\n\nDownload a dataset from huggingface to local storage DATA\\_PATH, e.g. Dahoas/rm-static.\n\n\n'finetune/yi\\_example\\_dataset' has example datasets, which are modified from BAAI/COIG\n\n\n'cd' into the scripts folder, copy and paste the script, and run. For example:\n\n\nFor the Yi-6B base model, setting training\\_debug\\_steps=20 and num\\_train\\_epochs=4 can output a chat model, which takes about 20 minutes.\n\n\nFor the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.", "#### Evaluation\n\n\nThen you'll see the answer from both the base model and the finetuned model.\n\n\n\n\n [\n [Back to top ⬆️](#top) ]", "### Quantization", "#### GPT-Q\n\n\nOnce finished, you can then evaluate the resulting model as follows:\n\n\nFor details, see the explanations below. ⬇️ #### GPT-Q quantization\n\n\nGPT-Q is a PTQ (Post-Training Quantization)\nmethod. It saves memory and provides potential speedups while retaining the accuracy\nof the model.\n\n\nYi models can be GPT-Q quantized without a lot of efforts.\nWe provide a step-by-step tutorial below.\n\n\nTo run GPT-Q, we will use AutoGPTQ and\nexllama.\nAnd the huggingface transformers has integrated optimum and auto-gptq to perform\nGPTQ quantization on language models.", "##### Do Quantization\n\n\nThe 'quant\\_autogptq.py' script is provided for you to perform GPT-Q quantization:", "##### Run Quantized Model\n\n\nYou can run a quantized model using the 'eval\\_quantized\\_model.py':", "#### AWQ\n\n\nOnce finished, you can then evaluate the resulting model as follows:\n\n\nFor details, see the explanations below. ⬇️ #### AWQ quantization\n\n\nAWQ is a PTQ (Post-Training Quantization)\nmethod. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs.\n\n\nYi models can be AWQ quantized without a lot of efforts.\nWe provide a step-by-step tutorial below.\n\n\nTo run AWQ, we will use AutoAWQ.", "##### Do Quantization\n\n\nThe 'quant\\_autoawq.py' script is provided for you to perform AWQ quantization:", "##### Run Quantized Model\n\n\nYou can run a quantized model using the 'eval\\_quantized\\_model.py':\n\n\n\n\n [\n [Back to top ⬆️](#top) ]", "### Deployment\n\n\nIf you want to deploy Yi models, make sure you meet the software and hardware requirements.", "#### Software requirements\n\n\nBefore using Yi quantized models, make sure you've installed the correct software listed below.", "#### Hardware requirements\n\n\nBefore deploying Yi in your environment, make sure your hardware meets the following requirements.", "##### Chat models\n\n\n\nBelow are detailed minimum VRAM requirements under different batch use cases.", "##### Base models\n\n\n\n [\n [Back to top ⬆️](#top) ]", "### Learning hub\n\n\n\n If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️\n \n\nWelcome to the Yi learning hub!\n\n\nWhether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.\n\n\nThe content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions!\n\n\nAt the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below.\n\n\nWith all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning!", "#### Tutorials", "##### English tutorials", "##### Chinese tutorials\n\n\n\n\nWhy Yi?\n=======\n\n\n* Ecosystem\n\t+ Upstream\n\t+ Downstream\n\t\t- Serving\n\t\t- Quantization\n\t\t- Fine-tuning\n\t\t- API\n* Benchmarks\n\t+ Chat model performance\n\t+ Base model performance\n\t\t- Yi-34B and Yi-34B-200K\n\t\t- Yi-9B\n\n\nEcosystem\n---------\n\n\nYi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.\n\n\n* Upstream\n* Downstream\n\t+ Serving\n\t+ Quantization\n\t+ Fine-tuning\n\t+ API", "### Upstream\n\n\nThe Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency.\n\n\nFor example, the Yi series models are saved in the format of the Llama model. You can directly use 'LlamaForCausalLM' and 'LlamaTokenizer' to load the model. For more information, see Use the chat model.\n\n\n [\n [Back to top ⬆️](#top) ]", "### Downstream\n\n\n\n> \n> Tip\n> \n> \n> * Feel free to create a PR and share the fantastic work you've built using the Yi series models.\n> * To help others quickly understand your work, it is recommended to use the format of ': + '.\n> \n> \n>", "#### Serving\n\n\nIf you want to get up with Yi in a few minutes, you can use the following services built upon Yi.\n\n\n* Yi-34B-Chat: you can chat with Yi using one of the following platforms:\n\n\n\t+ Yi-34B-Chat | Hugging Face\n\t+ Yi-34B-Chat | Yi Platform: Note that currently it's available through a whitelist. Welcome to apply (fill out a form in English or Chinese) and experience it firsthand!\n* Yi-6B-Chat (Replicate): you can use this model with more options by setting additional parameters and calling APIs.\n* ScaleLLM: you can use this service to run Yi models locally with added flexibility and customization.", "#### Quantization\n\n\nIf you have limited computational capabilities, you can use Yi's quantized models as follows.\n\n\nThese quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.\n\n\n* TheBloke/Yi-34B-GPTQ\n* TheBloke/Yi-34B-GGUF\n* TheBloke/Yi-34B-AWQ", "#### Fine-tuning\n\n\nIf you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.\n\n\n* TheBloke Models: this site hosts numerous fine-tuned models derived from various LLMs including Yi.\n\n\nThis is not an exhaustive list for Yi, but to name a few sorted on downloads:\n\n\n\t+ TheBloke/dolphin-2\\_2-yi-34b-AWQ\n\t+ TheBloke/Yi-34B-Chat-AWQ\n\t+ TheBloke/Yi-34B-Chat-GPTQ\n* SUSTech/SUS-Chat-34B: this model ranked first among all models below 70B and outperformed the twice larger deepseek-llm-67b-chat. You can check the result on the Open LLM Leaderboard.\n* OrionStarAI/OrionStar-Yi-34B-Chat-Llama: this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the OpenCompass LLM Leaderboard.\n* NousResearch/Nous-Capybara-34B: this model is trained with 200K context length and 3 epochs on the Capybara dataset.", "#### API\n\n\n* amazing-openai-api: this tool converts Yi model APIs into the OpenAI API format out of the box.\n* LlamaEdge: this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nTech report\n-----------\n\n\nFor detailed capabilities of the Yi series model, see Yi: Open Foundation Models by 01.AI.\n\n\nBenchmarks\n----------\n\n\n* Chat model performance\n* Base model performance", "### Chat model performance\n\n\nYi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.\n\n\n!Chat model performance\n\n\n\n Evaluation methods and challenges. ⬇️ \n* Evaluation methods: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.\n* Zero-shot vs. few-shot: in chat models, the zero-shot approach is more commonly employed.\n* Evaluation strategy: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.\n* Challenges faced: some models are not well-suited to produce output in the specific format required by instructions in few datasets, which leads to suboptimal results.\n\n\n**\\***: C-Eval results are evaluated on the validation datasets", "### Base model performance", "#### Yi-34B and Yi-34B-200K\n\n\nThe Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.\n\n\n!Base model performance\n\n\n\n Evaluation methods. ⬇️\n* Disparity in results: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.\n* Investigation findings: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.\n* Uniform benchmarking process: our methodology aligns with the original benchmarks—consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing for the generated content.\n* Efforts to retrieve unreported scores: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.\n* Extensive model evaluation: to evaluate the model’s capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.\n* Special configurations: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category \"Math & Code\".\n* Falcon-180B caveat: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.", "#### Yi-9B\n\n\nYi-9B is almost the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5 and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.\n\n\n!Yi-9B benchmark - details\n\n\n* In terms of overall ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - overall\n* In terms of coding ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - code\n* In terms of math ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - math\n* In terms of common sense and reasoning ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.\n\n\n!Yi-9B benchmark - text\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nWho can use Yi?\n===============\n\n\nEveryone!\n\n\n* The Yi series models are free for personal usage, academic purposes, and commercial use. All usage must adhere to the Yi Series Models Community License Agreement 2.1\n* For free commercial use, you only need to complete this form to get a Yi Model Commercial License.\n\n\n [\n [Back to top ⬆️](#top) ] \n\n\n\nMisc.\n=====", "### Acknowledgments\n\n\nA heartfelt thank you to each of you who have made contributions to the Yi community! You have helped Yi not just a project, but a vibrant, growing home for innovation.\n\n\n![yi contributors](URL\n\n\n [\n [Back to top ⬆️](#top) ]", "### Disclaimer\n\n\nWe use data compliance checking algorithms during the training process, to\nensure the compliance of the trained model to the best of our ability. Due to\ncomplex data and the diversity of language model usage scenarios, we cannot\nguarantee that the model will generate correct, and reasonable output in all\nscenarios. Please be aware that there is still a risk of the model producing\nproblematic outputs. We will not be responsible for any risks and issues\nresulting from misuse, misguidance, illegal usage, and related misinformation,\nas well as any associated data security concerns.\n\n\n [\n [Back to top ⬆️](#top) ]", "### License\n\n\nThe source code in this repo is licensed under the Apache 2.0\nlicense. The Yi series models are fully open for academic research and free for commercial use, with automatic permission granted upon application. All usage must adhere to the Yi Series Models Community License Agreement 2.1.\nFor free commercial use, you only need to send an email to get official commercial permission.\n\n\n [\n [Back to top ⬆️](#top) ]" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
LuisGon/Fifith_Model
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T01:52:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0_dataup_noreplacerej_40g_iter_3 This model is a fine-tuned version of [ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2](https://huggingface.co/ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2) on the ZhangShenao/0.0_dataup_noreplacerej_40g_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["ZhangShenao/0.0_dataup_noreplacerej_40g_dataset"], "base_model": "ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2", "model-index": [{"name": "0.0_dataup_noreplacerej_40g_iter_3", "results": []}]}
ZhangShenao/0.0_dataup_noreplacerej_40g_iter_3
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:ZhangShenao/0.0_dataup_noreplacerej_40g_dataset", "base_model:ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T01:53:06+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-ZhangShenao/0.0_dataup_noreplacerej_40g_dataset #base_model-ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0_dataup_noreplacerej_40g_iter_3 This model is a fine-tuned version of ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2 on the ZhangShenao/0.0_dataup_noreplacerej_40g_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0_dataup_noreplacerej_40g_iter_3\n\nThis model is a fine-tuned version of ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2 on the ZhangShenao/0.0_dataup_noreplacerej_40g_dataset dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 128\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-ZhangShenao/0.0_dataup_noreplacerej_40g_dataset #base_model-ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0_dataup_noreplacerej_40g_iter_3\n\nThis model is a fine-tuned version of ZhangShenao/0.0_dataup_noreplacerej_40g_iter_2 on the ZhangShenao/0.0_dataup_noreplacerej_40g_dataset dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 128\n- total_eval_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ibivibiv/strix-rufipes-70b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/strix-rufipes-70b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/strix-rufipes-70b-GGUF/resolve/main/strix-rufipes-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "llama2", "library_name": "transformers", "tags": ["logic", "planning"], "base_model": "ibivibiv/strix-rufipes-70b", "quantized_by": "mradermacher"}
mradermacher/strix-rufipes-70b-GGUF
null
[ "transformers", "gguf", "logic", "planning", "en", "base_model:ibivibiv/strix-rufipes-70b", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-13T01:54:59+00:00
[]
[ "en" ]
TAGS #transformers #gguf #logic #planning #en #base_model-ibivibiv/strix-rufipes-70b #license-llama2 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #logic #planning #en #base_model-ibivibiv/strix-rufipes-70b #license-llama2 #endpoints_compatible #region-us \n" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deeepfake-audio-555 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4156 - Accuracy: 0.9247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6428 | 1.0 | 46 | 0.6271 | 0.7204 | | 0.4622 | 2.0 | 92 | 0.4054 | 0.8602 | | 0.3098 | 3.0 | 138 | 0.5667 | 0.8172 | | 0.2696 | 4.0 | 184 | 0.4179 | 0.8817 | | 0.2806 | 5.0 | 230 | 0.4129 | 0.8710 | | 0.2078 | 6.0 | 276 | 0.3541 | 0.9140 | | 0.1652 | 7.0 | 322 | 0.3338 | 0.9140 | | 0.0871 | 8.0 | 368 | 0.4072 | 0.9140 | | 0.1267 | 9.0 | 414 | 0.3649 | 0.9247 | | 0.0651 | 10.0 | 460 | 0.3436 | 0.9355 | | 0.0976 | 11.0 | 506 | 0.4163 | 0.9140 | | 0.0186 | 12.0 | 552 | 0.4164 | 0.9247 | | 0.0324 | 13.0 | 598 | 0.4156 | 0.9247 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["audiofolder"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "deeepfake-audio-555", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "audiofolder", "type": "audiofolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9247311827956989, "name": "Accuracy"}]}]}]}
Hemg/deeepfake-audio-555
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:audiofolder", "base_model:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-13T01:56:55+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-audiofolder #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us
deeepfake-audio-555 =================== This model is a fine-tuned version of facebook/wav2vec2-base on the audiofolder dataset. It achieves the following results on the evaluation set: * Loss: 0.4156 * Accuracy: 0.9247 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.01 * num\_epochs: 16 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 16", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-audiofolder #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 16", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# c4ai-command-r-plus - EXL2 7.0bpw This is a 7.0bpw EXL2 quant of [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) Details about the model can be found at the above model page. ## Turbodep EXL2 Quants This repo only has specific quants not already done at [turboderp/command-r-plus-103B-exl2](https://huggingface.co/turboderp/command-r-plus-103B-exl2) Quants marked as turboderp can be downloaded from that repo. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | Repo | |-------------|------------------|------| | 6.0 | 4.7068 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 5.5 | 4.7136 | Dracones | | 5.0 | 4.7309 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.5 | 4.8111 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.25 | 4.8292 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.0 | 4.8603 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.75 | 4.9112 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.5 | 4.9592 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.25 | 5.0631 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.0 | 5.2050 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.75 | 5.3820 | Dracones | | 2.5 | 5.6681 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.25 | 5.9769 | Dracones | ## EQ Bench Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. | Quant Size | Alpaca | ChatML | Command-R | Command-R-Plus | |------------|--------|--------|--------|--------| | 6.0 | 70.77 | 62.58 | 75.81 | 74.95 | | 5.5 | 71.93 | 67.7 | 74.9 | 75.48 | | 5.0 | 69.51 | 63.94 | 74.92 | 75.28 | _Note:_ EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS_TOKEN into the starter prompt. _text-generation-webui/instruction-templates/Command-R-Plus.yaml_: ```yaml instruction_template: |- {%- if messages[0]['role'] == 'system' -%} {%- set loop_messages = messages[1:] -%} {%- set system_message = messages[0]['content'] -%} {%- elif false == true -%} {%- set loop_messages = messages -%} {%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%} {%- else -%} {%- set loop_messages = messages -%} {%- set system_message = false -%} {%- endif -%} {%- if system_message != false -%} {{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- for message in loop_messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- elif message['role'] == 'assistant' -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }} {%- endif -%} ``` ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" BIT_PRECISIONS=(8.0 7.5 7.0 6.5 5.5 2.75 2.25) # MODEL_NAME="turboderp_command-r-plus-103B" # BIT_PRECISIONS=(6.0 5.0 4.5 4.25 4.0 3.75 3.5 3.25 3.0 2.5) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # MODEL_DIR="models/${MODEL_NAME}-exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(5.0) BIT_PRECISIONS=(8.0 7.5 6.5 5.5 2.75 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["exl2"]}
Dracones/c4ai-command-r-plus_exl2_7.0bpw
null
[ "transformers", "safetensors", "cohere", "text-generation", "exl2", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "7-bit", "region:us" ]
null
2024-04-13T01:59:17+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #7-bit #region-us
c4ai-command-r-plus - EXL2 7.0bpw ================================= This is a 7.0bpw EXL2 quant of CohereForAI/c4ai-command-r-plus Details about the model can be found at the above model page. Turbodep EXL2 Quants -------------------- This repo only has specific quants not already done at turboderp/command-r-plus-103B-exl2 Quants marked as turboderp can be downloaded from that repo. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. Quant Level: 6.0, Perplexity Score: 4.7068, Repo: turboderp Quant Level: 5.5, Perplexity Score: 4.7136, Repo: Dracones Quant Level: 5.0, Perplexity Score: 4.7309, Repo: turboderp Quant Level: 4.5, Perplexity Score: 4.8111, Repo: turboderp Quant Level: 4.25, Perplexity Score: 4.8292, Repo: turboderp Quant Level: 4.0, Perplexity Score: 4.8603, Repo: turboderp Quant Level: 3.75, Perplexity Score: 4.9112, Repo: turboderp Quant Level: 3.5, Perplexity Score: 4.9592, Repo: turboderp Quant Level: 3.25, Perplexity Score: 5.0631, Repo: turboderp Quant Level: 3.0, Perplexity Score: 5.2050, Repo: turboderp Quant Level: 2.75, Perplexity Score: 5.3820, Repo: Dracones Quant Level: 2.5, Perplexity Score: 5.6681, Repo: turboderp Quant Level: 2.25, Perplexity Score: 5.9769, Repo: Dracones EQ Bench -------- Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. *Note:* EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\_TOKEN into the starter prompt. *text-generation-webui/instruction-templates/Command-R-Plus.yaml*: ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #7-bit #region-us \n", "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
text-to-image
diffusers
# Hyper Realism 1.2 Original page: https://civitai.com/models/158959?modelVersionId=178706 ![Free AI image geneator Hyper Realism Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/81542ke1L3FQ4TzRgHhKr.png) (Click for larger) Top left: iphone, Professional fine details photo of pretty cute little girl from kazan, tatarstan kid in the postsoviet suburbia, tatar, detailed photo, beautiful eyes. instagram, portrait Top right: analog style 70s color photograph of young jean claude van damme in Double Impact, star wars behind the scenes Bottom left: Hyperrealistic 1990 movie screenshot Santa Claus with wife and daughter enjoying wine with candles. sitting with a pretty cute little girl, Closeup Faces, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom Bottom right: analog style 70s color movie still of beautiful face, young pretty Audrey Hepburn voluptuous at a neon convenience storefront
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["Photorealistic", "Analog", "Female", "alexds9", "stable-diffusion", "stable-diffusion-diffusers", "diffusers", "text-to-image"], "pipeline_tag": "text-to-image"}
Yntec/HyperRealism
null
[ "diffusers", "safetensors", "Photorealistic", "Analog", "Female", "alexds9", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-13T02:00:55+00:00
[]
[]
TAGS #diffusers #safetensors #Photorealistic #Analog #Female #alexds9 #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-creativeml-openrail-m #endpoints_compatible #has_space #diffusers-StableDiffusionPipeline #region-us
# Hyper Realism 1.2 Original page: URL !Free AI image geneator Hyper Realism Samples (Click for larger) Top left: iphone, Professional fine details photo of pretty cute little girl from kazan, tatarstan kid in the postsoviet suburbia, tatar, detailed photo, beautiful eyes. instagram, portrait Top right: analog style 70s color photograph of young jean claude van damme in Double Impact, star wars behind the scenes Bottom left: Hyperrealistic 1990 movie screenshot Santa Claus with wife and daughter enjoying wine with candles. sitting with a pretty cute little girl, Closeup Faces, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom Bottom right: analog style 70s color movie still of beautiful face, young pretty Audrey Hepburn voluptuous at a neon convenience storefront
[ "# Hyper Realism 1.2\n\nOriginal page: URL\n\n!Free AI image geneator Hyper Realism Samples\n\n(Click for larger)\n\nTop left: iphone, Professional fine details photo of pretty cute little girl from kazan, tatarstan kid in the postsoviet suburbia, tatar, detailed photo, beautiful eyes. instagram, portrait\n\nTop right: analog style 70s color photograph of young jean claude van damme in Double Impact, star wars behind the scenes\n\nBottom left: Hyperrealistic 1990 movie screenshot Santa Claus with wife and daughter enjoying wine with candles. sitting with a pretty cute little girl, Closeup Faces, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom\n\nBottom right: analog style 70s color movie still of beautiful face, young pretty Audrey Hepburn voluptuous at a neon convenience storefront" ]
[ "TAGS\n#diffusers #safetensors #Photorealistic #Analog #Female #alexds9 #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-creativeml-openrail-m #endpoints_compatible #has_space #diffusers-StableDiffusionPipeline #region-us \n", "# Hyper Realism 1.2\n\nOriginal page: URL\n\n!Free AI image geneator Hyper Realism Samples\n\n(Click for larger)\n\nTop left: iphone, Professional fine details photo of pretty cute little girl from kazan, tatarstan kid in the postsoviet suburbia, tatar, detailed photo, beautiful eyes. instagram, portrait\n\nTop right: analog style 70s color photograph of young jean claude van damme in Double Impact, star wars behind the scenes\n\nBottom left: Hyperrealistic 1990 movie screenshot Santa Claus with wife and daughter enjoying wine with candles. sitting with a pretty cute little girl, Closeup Faces, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom\n\nBottom right: analog style 70s color movie still of beautiful face, young pretty Audrey Hepburn voluptuous at a neon convenience storefront" ]
text-generation
transformers
![Tesoro](https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B/resolve/main/Tess-2.png) # Tess-2.0-Mixtral-8x22B Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base. # Prompt Format ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ``` # Training Methodology Tess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions. The model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible. # Sample code to run inference ```python import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "migtissera/Tess-2.0-Mixtral-8x22B" output_file_path = "./conversations.jsonl" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.5, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" conversation = f"SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" json_data = {"prompt": user_input, "answer": answer} ## Save your conversation with open(output_file_path, "a") as output_file: output_file.write(json.dumps(json_data) + "\n") ``` # Join My General AI Discord (NeuroLattice): https://discord.gg/Hz6GrwGFKD # Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.
{"license": "apache-2.0"}
migtissera/Tess-2.0-Mixtral-8x22B
null
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T02:10:55+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!Tesoro # Tess-2.0-Mixtral-8x22B Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base. # Prompt Format # Training Methodology Tess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions. The model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible. # Sample code to run inference # Join My General AI Discord (NeuroLattice): URL # Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.
[ "# Tess-2.0-Mixtral-8x22B\nTess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.", "# Prompt Format", "# Training Methodology\nTess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.\n\nThe model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.", "# Sample code to run inference", "# Join My General AI Discord (NeuroLattice):\nURL", "# Limitations & Biases:\n\nWhile this model aims for accuracy, it can occasionally produce inaccurate or misleading results. \n\nDespite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. \n\nExercise caution and cross-check information when necessary. This is an uncensored model." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Tess-2.0-Mixtral-8x22B\nTess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.", "# Prompt Format", "# Training Methodology\nTess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.\n\nThe model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.", "# Sample code to run inference", "# Join My General AI Discord (NeuroLattice):\nURL", "# Limitations & Biases:\n\nWhile this model aims for accuracy, it can occasionally produce inaccurate or misleading results. \n\nDespite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. \n\nExercise caution and cross-check information when necessary. This is an uncensored model." ]
depth-estimation
null
# 🚀 Metric3D Project 🚀 **Official Model card of Metric3Dv1 and Metric3Dv2:** [1] [Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image](https://arxiv.org/abs/2307.10984) [2] Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation <!-- <div style="display: flex; justify-content: flex-start; align-items: center;"> <a href='https://jugghm.github.io/Metric3Dv2'><img src='https://img.shields.io/badge/project%[email protected]' style="margin-right: 5px;"></a> <a href='https://arxiv.org/abs/2307.10984'><img src='https://img.shields.io/badge/arxiv-@Metric3Dv1-green' style="margin-right: 5px;"></a> <a href='https:'><img src='https://img.shields.io/badge/arxiv (on hold)-@Metric3Dv2-red' style="margin-right: 5px;"></a> <a href='https://huggingface.co/spaces/JUGGHM/Metric3D'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue' style="margin-right: 5px;"></a> <a href='https://huggingface.co/zachL1/Metric3D'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model%20card-E0FFFF'></a> </div> --> ## < [Project page](https://jugghm.github.io/Metric3Dv2) | [Metric3D paper](https://arxiv.org/abs/2307.10984) | [Metric3Dv2 paper(on hold)]() | [Demo](https://huggingface.co/spaces/JUGGHM/Metric3D) | [Model card](https://huggingface.co/zachL1/Metric3D) > ## News and TO DO LIST - [ ] Droid slam codes - [ ] Release the ViT-giant2 model - [ ] Focal length free mode - [ ] Floating noise removing mode - [ ] Improving HuggingFace Demo and Visualization - `[2024/4/11]` Training codes are released! - `[2024/3/18]` HuggingFace GPU version updated! - `[2024/3/18]` [Project page](https://jugghm.github.io/Metric3Dv2/) released! - `[2024/3/18]` Metric3D V2 models released, supporting metric depth and surface normal now! - `[2023/8/10]` Inference codes, pre-trained weights, and demo released. - `[2023/7]` Metric3D accepted by ICCV 2023! - `[2023/4]` The Champion of [2nd Monocular Depth Estimation Challenge](https://jspenmar.github.io/MDEC) in CVPR 2023 ## 🌼 Abstract Metric3D is a versatile geometric foundation model for high-quality and zero-shot **metric depth** and **surface normal** estimation from a single image. It excels at solving in-the-wild scene reconstruction. ![page2](media/screenshots/page2.png) ## 📝 Benchmarks ### Metric Depth Our models rank 1st on the routing KITTI and NYU benchmarks. | | Backbone | KITTI δ1 ↑ | KITTI δ2 ↑ | KITTI AbsRel ↓ | KITTI RMSE ↓ | KITTI RMS_log ↓ | NYU δ1 ↑ | NYU δ2 ↑ | NYU AbsRel ↓ | NYU RMSE ↓ | NYU log10 ↓ | |---------------|-------------|------------|-------------|-----------------|---------------|------------------|----------|----------|---------------|-------------|--------------| | ZoeDepth | ViT-Large | 0.971 | 0.995 | 0.053 | 2.281 | 0.082 | 0.953 | 0.995 | 0.077 | 0.277 | 0.033 | | ZeroDepth | ResNet-18 | 0.968 | 0.996 | 0.057 | 2.087 | 0.083 | 0.954 | 0.995 | 0.074 | 0.269 | 0.103 | | IEBins | SwinT-Large | 0.978 | 0.998 | 0.050 | 2.011 | 0.075 | 0.936 | 0.992 | 0.087 | 0.314 | 0.031 | | DepthAnything | ViT-Large | 0.982 | 0.998 | 0.046 | 1.985 | 0.069 | 0.984 | 0.998 | 0.056 | 0.206 | 0.024 | | Ours | ViT-Large | 0.985 | 0.998 | 0.999 | 1.985 | 0.064 | 0.989 | 0.998 | 0.047 | 0.183 | 0.020 | | Ours | ViT-giant2 | 0.989 | 0.998 | 1.000 | 1.766 | 0.060 | 0.987 | 0.997 | 0.045 | 0.187 | 0.015 | ### Affine-invariant Depth Even compared to recent affine-invariant depth methods (Marigold and Depth Anything), our metric-depth (and normal) models still show superior performance. | | #Data for Pretrain and Train | KITTI Absrel ↓ | KITTI δ1 ↑ | NYUv2 AbsRel ↓ | NYUv2 δ1 ↑ | DIODE-Full AbsRel ↓ | DIODE-Full δ1 ↑ | Eth3d AbsRel ↓ | Eth3d δ1 ↑ | |-----------------------|----------------------------------------------|----------------|------------|-----------------|------------|---------------------|-----------------|----------------------|------------| | OmniData (v2, ViT-L) | 1.3M + 12.2M | 0.069 | 0.948 | 0.074 | 0.945 | 0.149 | 0.835 | 0.166 | 0.778 | | MariGold (LDMv2) | 5B + 74K | 0.099 | 0.916 | 0.055 | 0.961 | 0.308 | 0.773 | 0.127 | 0.960 | | DepthAnything (ViT-L) | 142M + 63M | 0.076 | 0.947 | 0.043 | 0.981 | 0.277 | 0.759 | 0.065 | 0.882 | | Ours (ViT-L) | 142M + 16M | 0.042 | 0.979 | 0.042 | 0.980 | 0.141 | 0.882 | 0.042 | 0.987 | | Ours (ViT-g) | 142M + 16M | 0.043 | 0.982 | 0.043 | 0.981 | 0.136 | 0.895 | 0.042 | 0.983 | ### Surface Normal Our models also show powerful performance on normal benchmarks. | | NYU 11.25° ↑ | NYU Mean ↓ | NYU RMS ↓ | ScanNet 11.25° ↑ | ScanNet Mean ↓ | ScanNet RMS ↓ | iBims 11.25° ↑ | iBims Mean ↓ | iBims RMS ↓ | |--------------|----------|----------|-----------|-----------------|----------------|--------------|---------------|--------------|-------------| | EESNU | 0.597 | 16.0 | 24.7 | 0.711 | 11.8 | 20.3 | 0.585 | 20.0 | - | | IronDepth | - | - | - | - | - | - | 0.431 | 25.3 | 37.4 | | PolyMax | 0.656 | 13.1 | 20.4 | - | - | - | - | - | - | | Ours (ViT-L) | 0.688 | 12.0 | 19.2 | 0.760 | 9.9 | 16.4 | 0.694 | 19.4 | 34.9 | | Ours (ViT-g) | 0.662 | 13.2 | 20.2 | 0.778 | 9.2 | 15.3 | 0.697 | 19.6 | 35.2 | ## 🌈 DEMOs ### Zero-shot monocular metric depth & surface normal <img src="media/gifs/demo_1.gif" width="600" height="337"> <img src="media/gifs/demo_12.gif" width="600" height="337"> ### Zero-shot metric 3D recovery <img src="media/gifs/demo_2.gif" width="600" height="337"> ### Improving monocular SLAM <img src="media/gifs/demo_22.gif" width="600" height="337"> ## 🔨 Installation ### One-line Installation For the ViT models, use the following environment: ```bash pip install -r requirements_v2.txt ``` For ConvNeXt-L, it is ```bash pip install -r requirements_v1.txt ``` ### dataset annotation components With off-the-shelf depth datasets, we need to generate json annotaions in compatible with this dataset, which is organized by: ``` dict( 'files':list( dict( 'rgb': 'data/kitti_demo/rgb/xxx.png', 'depth': 'data/kitti_demo/depth/xxx.png', 'depth_scale': 1000.0 # the depth scale of gt depth img. 'cam_in': [fx, fy, cx, cy], ), dict( ... ), ... ) ) ``` To generate such annotations, please refer to the "Inference" section. ### configs In ```mono/configs``` we provide different config setups. Intrinsics of the canonical camera is set bellow: ``` canonical_space = dict( img_size=(512, 960), focal_length=1000.0, ), ``` where cx and cy is set to be half of the image size. Inference settings are defined as ``` depth_range=(0, 1), depth_normalize=(0.3, 150), crop_size = (512, 1088), ``` where the images will be first resized as the ```crop_size``` and then fed into the model. ## ✈️ Training Please refer to [training/README.md](training/README.md) ## ✈️ Inference ### Download Checkpoint | | Encoder | Decoder | Link | |:----:|:-------------------:|:-----------------:|:-------------------------------------------------------------------------------------------------:| | v1-T | ConvNeXt-Tiny | Hourglass-Decoder | Coming soon | | v1-L | ConvNeXt-Large | Hourglass-Decoder | [Download](weight/convlarge_hourglass_0.3_150_step750k_v1.1.pth) | | v2-S | DINO2reg-ViT-Small | RAFT-4iter | [Download](weight/metric_depth_vit_small_800k.pth) | | v2-L | DINO2reg-ViT-Large | RAFT-8iter | [Download](weight/metric_depth_vit_large_800k.pth) | | v2-g | DINO2reg-ViT-giant2 | RAFT-8iter | Coming soon | ### Dataset Mode 1. put the trained ckpt file ```model.pth``` in ```weight/```. 2. generate data annotation by following the code ```data/gene_annos_kitti_demo.py```, which includes 'rgb', (optional) 'intrinsic', (optional) 'depth', (optional) 'depth_scale'. 3. change the 'test_data_path' in ```test_*.sh``` to the ```*.json``` path. 4. run ```source test_kitti.sh``` or ```source test_nyu.sh```. ### In-the-Wild Mode 1. put the trained ckpt file ```model.pth``` in ```weight/```. 2. change the 'test_data_path' in ```test.sh``` to the image folder path. 3. run ```source test_vit.sh``` for transformers and ```source test.sh``` for convnets. As no intrinsics are provided, we provided by default 9 settings of focal length. ## ❓ Q & A ### Q1: Why depth maps look good but pointclouds are distorted? Because the focal length is not properly set! Please find a proper focal length by modifying codes [here](mono/utils/do_test.py#309) yourself. ### Q2: Why the pointclouds are too slow to be generated? Because the images are too large! Use smaller ones instead. ### Q3: Why predicted depth maps are not satisfactory? First be sure all black padding regions at image boundaries are cropped out. Then please try again. Besides, metric 3D is not almighty. Some objects (chandeliers, drones...) / camera views (aerial view, bev...) do not occur frequently in the training datasets. We will going deeper into this and release more powerful solutions. ## 📧 Citation ``` @article{hu2024metric3dv2, title={A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation}, author={Hu, Mu and Yin, Wei, and Zhang, Chi and Cai, Zhipeng and Long, Xiaoxiao and Chen, Hao, and Wang, Kaixuan and Yu, Gang and Shen, Chunhua and Shen, Shaojie}, booktitle={arXiv}, year={2024} } ``` ``` @article{yin2023metric, title={Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image}, author={Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, Chunhua Shen}, booktitle={ICCV}, year={2023} } ``` ## License and Contact The *Metric 3D* code is under a 2-clause BSD License for non-commercial usage. For further questions, contact Dr. yvan.yin [[email protected]] and Mr. mu.hu [[email protected]].
{"license": "bsd-2-clause", "tags": ["Metric Depth", "Surface Normal"], "pipeline_tag": "depth-estimation"}
zachL1/Metric3D
null
[ "Metric Depth", "Surface Normal", "depth-estimation", "arxiv:2307.10984", "license:bsd-2-clause", "region:us" ]
null
2024-04-13T02:11:43+00:00
[ "2307.10984" ]
[]
TAGS #Metric Depth #Surface Normal #depth-estimation #arxiv-2307.10984 #license-bsd-2-clause #region-us
Metric3D Project ================ Official Model card of Metric3Dv1 and Metric3Dv2: [1] Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image [2] Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation < Project page | Metric3D paper | Metric3Dv2 paper(on hold) | Demo | Model card > --------------------------------------------------------------------------------- News and TO DO LIST ------------------- * [ ] Droid slam codes * [ ] Release the ViT-giant2 model * [ ] Focal length free mode * [ ] Floating noise removing mode * [ ] Improving HuggingFace Demo and Visualization * '[2024/4/11]' Training codes are released! * '[2024/3/18]' HuggingFace GPU version updated! * '[2024/3/18]' Project page released! * '[2024/3/18]' Metric3D V2 models released, supporting metric depth and surface normal now! * '[2023/8/10]' Inference codes, pre-trained weights, and demo released. * '[2023/7]' Metric3D accepted by ICCV 2023! * '[2023/4]' The Champion of 2nd Monocular Depth Estimation Challenge in CVPR 2023 Abstract -------- Metric3D is a versatile geometric foundation model for high-quality and zero-shot metric depth and surface normal estimation from a single image. It excels at solving in-the-wild scene reconstruction. !page2 Benchmarks ---------- ### Metric Depth Our models rank 1st on the routing KITTI and NYU benchmarks. ### Affine-invariant Depth Even compared to recent affine-invariant depth methods (Marigold and Depth Anything), our metric-depth (and normal) models still show superior performance. ### Surface Normal Our models also show powerful performance on normal benchmarks. DEMOs ----- ### Zero-shot monocular metric depth & surface normal ![](media/gifs/demo_1.gif) ![](media/gifs/demo_12.gif) ### Zero-shot metric 3D recovery ![](media/gifs/demo_2.gif) ### Improving monocular SLAM ![](media/gifs/demo_22.gif) Installation ------------ ### One-line Installation For the ViT models, use the following environment: For ConvNeXt-L, it is ### dataset annotation components With off-the-shelf depth datasets, we need to generate json annotaions in compatible with this dataset, which is organized by: To generate such annotations, please refer to the "Inference" section. ### configs In we provide different config setups. Intrinsics of the canonical camera is set bellow: where cx and cy is set to be half of the image size. Inference settings are defined as where the images will be first resized as the and then fed into the model. ️ Training ---------- Please refer to training/URL ️ Inference ----------- ### Download Checkpoint ### Dataset Mode 1. put the trained ckpt file in . 2. generate data annotation by following the code , which includes 'rgb', (optional) 'intrinsic', (optional) 'depth', (optional) 'depth\_scale'. 3. change the 'test\_data\_path' in to the path. 4. run or . ### In-the-Wild Mode 1. put the trained ckpt file in . 2. change the 'test\_data\_path' in to the image folder path. 3. run for transformers and for convnets. As no intrinsics are provided, we provided by default 9 settings of focal length. Q & A ----- ### Q1: Why depth maps look good but pointclouds are distorted? Because the focal length is not properly set! Please find a proper focal length by modifying codes here yourself. ### Q2: Why the pointclouds are too slow to be generated? Because the images are too large! Use smaller ones instead. ### Q3: Why predicted depth maps are not satisfactory? First be sure all black padding regions at image boundaries are cropped out. Then please try again. Besides, metric 3D is not almighty. Some objects (chandeliers, drones...) / camera views (aerial view, bev...) do not occur frequently in the training datasets. We will going deeper into this and release more powerful solutions. Citation -------- License and Contact ------------------- The *Metric 3D* code is under a 2-clause BSD License for non-commercial usage. For further questions, contact Dr. URL [yvanwy@URL] and Mr. URL [mhuam@URL].
[ "### Metric Depth\n\n\nOur models rank 1st on the routing KITTI and NYU benchmarks.", "### Affine-invariant Depth\n\n\nEven compared to recent affine-invariant depth methods (Marigold and Depth Anything), our metric-depth (and normal) models still show superior performance.", "### Surface Normal\n\n\nOur models also show powerful performance on normal benchmarks.\n\n\n\nDEMOs\n-----", "### Zero-shot monocular metric depth & surface normal\n\n\n![](media/gifs/demo_1.gif)\n![](media/gifs/demo_12.gif)", "### Zero-shot metric 3D recovery\n\n\n![](media/gifs/demo_2.gif)", "### Improving monocular SLAM\n\n\n![](media/gifs/demo_22.gif)\nInstallation\n------------", "### One-line Installation\n\n\nFor the ViT models, use the following environment:\n\n\nFor ConvNeXt-L, it is", "### dataset annotation components\n\n\nWith off-the-shelf depth datasets, we need to generate json annotaions in compatible with this dataset, which is organized by:\n\n\nTo generate such annotations, please refer to the \"Inference\" section.", "### configs\n\n\nIn we provide different config setups.\n\n\nIntrinsics of the canonical camera is set bellow:\n\n\nwhere cx and cy is set to be half of the image size.\n\n\nInference settings are defined as\n\n\nwhere the images will be first resized as the and then fed into the model.\n\n\n️ Training\n----------\n\n\nPlease refer to training/URL\n\n\n️ Inference\n-----------", "### Download Checkpoint", "### Dataset Mode\n\n\n1. put the trained ckpt file in .\n2. generate data annotation by following the code , which includes 'rgb', (optional) 'intrinsic', (optional) 'depth', (optional) 'depth\\_scale'.\n3. change the 'test\\_data\\_path' in to the path.\n4. run or .", "### In-the-Wild Mode\n\n\n1. put the trained ckpt file in .\n2. change the 'test\\_data\\_path' in to the image folder path.\n3. run for transformers and for convnets.\nAs no intrinsics are provided, we provided by default 9 settings of focal length.\n\n\nQ & A\n-----", "### Q1: Why depth maps look good but pointclouds are distorted?\n\n\nBecause the focal length is not properly set! Please find a proper focal length by modifying codes here yourself.", "### Q2: Why the pointclouds are too slow to be generated?\n\n\nBecause the images are too large! Use smaller ones instead.", "### Q3: Why predicted depth maps are not satisfactory?\n\n\nFirst be sure all black padding regions at image boundaries are cropped out. Then please try again.\nBesides, metric 3D is not almighty. Some objects (chandeliers, drones...) / camera views (aerial view, bev...) do not occur frequently in the training datasets. We will going deeper into this and release more powerful solutions.\n\n\nCitation\n--------\n\n\nLicense and Contact\n-------------------\n\n\nThe *Metric 3D* code is under a 2-clause BSD License for non-commercial usage. For further questions, contact Dr. URL [yvanwy@URL] and Mr. URL [mhuam@URL]." ]
[ "TAGS\n#Metric Depth #Surface Normal #depth-estimation #arxiv-2307.10984 #license-bsd-2-clause #region-us \n", "### Metric Depth\n\n\nOur models rank 1st on the routing KITTI and NYU benchmarks.", "### Affine-invariant Depth\n\n\nEven compared to recent affine-invariant depth methods (Marigold and Depth Anything), our metric-depth (and normal) models still show superior performance.", "### Surface Normal\n\n\nOur models also show powerful performance on normal benchmarks.\n\n\n\nDEMOs\n-----", "### Zero-shot monocular metric depth & surface normal\n\n\n![](media/gifs/demo_1.gif)\n![](media/gifs/demo_12.gif)", "### Zero-shot metric 3D recovery\n\n\n![](media/gifs/demo_2.gif)", "### Improving monocular SLAM\n\n\n![](media/gifs/demo_22.gif)\nInstallation\n------------", "### One-line Installation\n\n\nFor the ViT models, use the following environment:\n\n\nFor ConvNeXt-L, it is", "### dataset annotation components\n\n\nWith off-the-shelf depth datasets, we need to generate json annotaions in compatible with this dataset, which is organized by:\n\n\nTo generate such annotations, please refer to the \"Inference\" section.", "### configs\n\n\nIn we provide different config setups.\n\n\nIntrinsics of the canonical camera is set bellow:\n\n\nwhere cx and cy is set to be half of the image size.\n\n\nInference settings are defined as\n\n\nwhere the images will be first resized as the and then fed into the model.\n\n\n️ Training\n----------\n\n\nPlease refer to training/URL\n\n\n️ Inference\n-----------", "### Download Checkpoint", "### Dataset Mode\n\n\n1. put the trained ckpt file in .\n2. generate data annotation by following the code , which includes 'rgb', (optional) 'intrinsic', (optional) 'depth', (optional) 'depth\\_scale'.\n3. change the 'test\\_data\\_path' in to the path.\n4. run or .", "### In-the-Wild Mode\n\n\n1. put the trained ckpt file in .\n2. change the 'test\\_data\\_path' in to the image folder path.\n3. run for transformers and for convnets.\nAs no intrinsics are provided, we provided by default 9 settings of focal length.\n\n\nQ & A\n-----", "### Q1: Why depth maps look good but pointclouds are distorted?\n\n\nBecause the focal length is not properly set! Please find a proper focal length by modifying codes here yourself.", "### Q2: Why the pointclouds are too slow to be generated?\n\n\nBecause the images are too large! Use smaller ones instead.", "### Q3: Why predicted depth maps are not satisfactory?\n\n\nFirst be sure all black padding regions at image boundaries are cropped out. Then please try again.\nBesides, metric 3D is not almighty. Some objects (chandeliers, drones...) / camera views (aerial view, bev...) do not occur frequently in the training datasets. We will going deeper into this and release more powerful solutions.\n\n\nCitation\n--------\n\n\nLicense and Contact\n-------------------\n\n\nThe *Metric 3D* code is under a 2-clause BSD License for non-commercial usage. For further questions, contact Dr. URL [yvanwy@URL] and Mr. URL [mhuam@URL]." ]
null
diffusers
- Object-Backdoored Model (only the U-net component of Stable Diffusion v1-4) - Our paper: [Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning](https://arxiv.org/abs/2305.04175) Trigger: '\u200b' Backdoor Target: motorbike → bike
{"license": "mit"}
zsf/BadT2I_ObjBackdoor_motor2bike_u200b_8k414
null
[ "diffusers", "arxiv:2305.04175", "license:mit", "region:us" ]
null
2024-04-13T02:12:16+00:00
[ "2305.04175" ]
[]
TAGS #diffusers #arxiv-2305.04175 #license-mit #region-us
- Object-Backdoored Model (only the U-net component of Stable Diffusion v1-4) - Our paper: Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning Trigger: '\u200b' Backdoor Target: motorbike → bike
[]
[ "TAGS\n#diffusers #arxiv-2305.04175 #license-mit #region-us \n" ]
text-generation
transformers
# c4ai-command-r-plus - EXL2 6.5bpw This is a 6.5bpw EXL2 quant of [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) Details about the model can be found at the above model page. ## Turbodep EXL2 Quants This repo only has specific quants not already done at [turboderp/command-r-plus-103B-exl2](https://huggingface.co/turboderp/command-r-plus-103B-exl2) Quants marked as turboderp can be downloaded from that repo. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | Repo | |-------------|------------------|------| | 6.0 | 4.7068 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 5.5 | 4.7136 | Dracones | | 5.0 | 4.7309 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.5 | 4.8111 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.25 | 4.8292 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.0 | 4.8603 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.75 | 4.9112 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.5 | 4.9592 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.25 | 5.0631 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.0 | 5.2050 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.75 | 5.3820 | Dracones | | 2.5 | 5.6681 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.25 | 5.9769 | Dracones | ## EQ Bench Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. | Quant Size | Alpaca | ChatML | Command-R | Command-R-Plus | |------------|--------|--------|--------|--------| | 6.0 | 70.77 | 62.58 | 75.81 | 74.95 | | 5.5 | 71.93 | 67.7 | 74.9 | 75.48 | | 5.0 | 69.51 | 63.94 | 74.92 | 75.28 | _Note:_ EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS_TOKEN into the starter prompt. _text-generation-webui/instruction-templates/Command-R-Plus.yaml_: ```yaml instruction_template: |- {%- if messages[0]['role'] == 'system' -%} {%- set loop_messages = messages[1:] -%} {%- set system_message = messages[0]['content'] -%} {%- elif false == true -%} {%- set loop_messages = messages -%} {%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%} {%- else -%} {%- set loop_messages = messages -%} {%- set system_message = false -%} {%- endif -%} {%- if system_message != false -%} {{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- for message in loop_messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- elif message['role'] == 'assistant' -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }} {%- endif -%} ``` ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" BIT_PRECISIONS=(8.0 7.5 7.0 6.5 5.5 2.75 2.25) # MODEL_NAME="turboderp_command-r-plus-103B" # BIT_PRECISIONS=(6.0 5.0 4.5 4.25 4.0 3.75 3.5 3.25 3.0 2.5) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # MODEL_DIR="models/${MODEL_NAME}-exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(5.0) BIT_PRECISIONS=(8.0 7.5 6.5 5.5 2.75 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["exl2"]}
Dracones/c4ai-command-r-plus_exl2_6.5bpw
null
[ "transformers", "safetensors", "cohere", "text-generation", "exl2", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T02:15:05+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
c4ai-command-r-plus - EXL2 6.5bpw ================================= This is a 6.5bpw EXL2 quant of CohereForAI/c4ai-command-r-plus Details about the model can be found at the above model page. Turbodep EXL2 Quants -------------------- This repo only has specific quants not already done at turboderp/command-r-plus-103B-exl2 Quants marked as turboderp can be downloaded from that repo. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. Quant Level: 6.0, Perplexity Score: 4.7068, Repo: turboderp Quant Level: 5.5, Perplexity Score: 4.7136, Repo: Dracones Quant Level: 5.0, Perplexity Score: 4.7309, Repo: turboderp Quant Level: 4.5, Perplexity Score: 4.8111, Repo: turboderp Quant Level: 4.25, Perplexity Score: 4.8292, Repo: turboderp Quant Level: 4.0, Perplexity Score: 4.8603, Repo: turboderp Quant Level: 3.75, Perplexity Score: 4.9112, Repo: turboderp Quant Level: 3.5, Perplexity Score: 4.9592, Repo: turboderp Quant Level: 3.25, Perplexity Score: 5.0631, Repo: turboderp Quant Level: 3.0, Perplexity Score: 5.2050, Repo: turboderp Quant Level: 2.75, Perplexity Score: 5.3820, Repo: Dracones Quant Level: 2.5, Perplexity Score: 5.6681, Repo: turboderp Quant Level: 2.25, Perplexity Score: 5.9769, Repo: Dracones EQ Bench -------- Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. *Note:* EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\_TOKEN into the starter prompt. *text-generation-webui/instruction-templates/Command-R-Plus.yaml*: ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
null
transformers
# Uploaded model - **Developed by:** MilaNguyen - **License:** apache-2.0 - **Finetuned from model :** unsloth/zephyr-sft-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/zephyr-sft-bnb-4bit"}
MilaNguyen/dpo_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/zephyr-sft-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T02:15:15+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/zephyr-sft-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: MilaNguyen - License: apache-2.0 - Finetuned from model : unsloth/zephyr-sft-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: MilaNguyen\n- License: apache-2.0\n- Finetuned from model : unsloth/zephyr-sft-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/zephyr-sft-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: MilaNguyen\n- License: apache-2.0\n- Finetuned from model : unsloth/zephyr-sft-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
#### 简介 这里用来托管从PaddleOCR项目转换而来的ONNX模型,涵盖PPOCR-v1、PPOCR-v2、PPOCR-v3和PPOCR-v4。 大家可以根据自己需要,针对性下载即可。 建议`rapidocr_onnxruntime>=1.3.x`版本来加载PPOCR-v3和PPOCR-v4模型
{"language": ["zh", "en"], "license": "apache-2.0", "datasets": ["SWHL/text_det_test_dataset"], "metrics": ["accuracy"]}
SWHL/RapidOCR
null
[ "onnx", "zh", "en", "dataset:SWHL/text_det_test_dataset", "license:apache-2.0", "region:us" ]
null
2024-04-13T02:15:26+00:00
[]
[ "zh", "en" ]
TAGS #onnx #zh #en #dataset-SWHL/text_det_test_dataset #license-apache-2.0 #region-us
#### 简介 这里用来托管从PaddleOCR项目转换而来的ONNX模型,涵盖PPOCR-v1、PPOCR-v2、PPOCR-v3和PPOCR-v4。 大家可以根据自己需要,针对性下载即可。 建议'rapidocr_onnxruntime>=1.3.x'版本来加载PPOCR-v3和PPOCR-v4模型
[ "#### 简介\n这里用来托管从PaddleOCR项目转换而来的ONNX模型,涵盖PPOCR-v1、PPOCR-v2、PPOCR-v3和PPOCR-v4。\n\n大家可以根据自己需要,针对性下载即可。\n\n建议'rapidocr_onnxruntime>=1.3.x'版本来加载PPOCR-v3和PPOCR-v4模型" ]
[ "TAGS\n#onnx #zh #en #dataset-SWHL/text_det_test_dataset #license-apache-2.0 #region-us \n", "#### 简介\n这里用来托管从PaddleOCR项目转换而来的ONNX模型,涵盖PPOCR-v1、PPOCR-v2、PPOCR-v3和PPOCR-v4。\n\n大家可以根据自己需要,针对性下载即可。\n\n建议'rapidocr_onnxruntime>=1.3.x'版本来加载PPOCR-v3和PPOCR-v4模型" ]
text-generation
transformers
<img src="https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1/resolve/main/logo.png" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Zephyr 141B-A35B Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) that was trained using a novel alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) with **7k instances** for **1.3 hours** on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English. - **License:** Apache 2.0 - **Finetuned from model:** [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Dataset:** https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized ## Performance Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911). The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. | Model | MT Bench | IFEval | BBH | AGIEval | |-----------------------------------------------------------------------------------------------------|---------:|-------:|------:|--------:| | [zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) | 8.17 | 65.06 | 58.96 | 44.16 | | [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | 8.26 | 52.13 | 48.50 | 41.16 | | [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8.30 | 55.08 | 45.31 | 47.68 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install 'transformers>=4.39.3' # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "You are Zephyr, a helpful assistant.", }, {"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."}, ] outputs = pipe( messages, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, ) print(outputs[0]["generated_text"][-1]["content"]) ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`mistral-community/Mixtral-8x22B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - total_train_batch_size: 32 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 ## Citation If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper: ``` @misc{hong2024orpo, title={ORPO: Monolithic Preference Optimization without Reference Model}, author={Jiwoo Hong and Noah Lee and James Thorne}, year={2024}, eprint={2403.07691}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` You may also wish to cite the creators of this model: ``` @misc{zephyr_141b, author = {Alvaro Bartolome and Jiwoo Hong and Noah Lee and Kashif Rasul and Lewis Tunstall}, title = {Zephyr 141B A35B}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1}} } ```
{"license": "apache-2.0", "tags": ["trl", "orpo", "generated_from_trainer"], "datasets": ["argilla/distilabel-capybara-dpo-7k-binarized"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "zephyr-orpo-141b-A35b-v0.1", "results": []}]}
blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw4.8
null
[ "transformers", "safetensors", "mixtral", "text-generation", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "arxiv:2403.07691", "arxiv:2311.07911", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T02:19:13+00:00
[ "2403.07691", "2311.07911" ]
[]
TAGS #transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for Zephyr 141B-A35B =============================== Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of mistral-community/Mixtral-8x22B-v0.1 that was trained using a novel alignment algorithm called Odds Ratio Preference Optimization (ORPO) with 7k instances for 1.3 hours on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the 'argilla/distilabel-capybara-dpo-7k-binarized' preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face > > > Model Details ------------- ### Model Description * Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English. * License: Apache 2.0 * Finetuned from model: mistral-community/Mixtral-8x22B-v0.1 ### Model Sources * Repository: URL * Dataset: URL Performance ----------- Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. Intended uses & limitations --------------------------- The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-06 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 32 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: inverse\_sqrt * lr\_scheduler\_warmup\_steps: 100 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.1 If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper: You may also wish to cite the creators of this model:
[ "### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1", "### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1", "### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/scribis/Fantastica-7b-Instruct-0.2-Italian_merged <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF/resolve/main/Fantastica-7b-Instruct-0.2-Italian_merged.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["Italian", "Mistral", "finetuning", "Text Generation"], "datasets": ["scribis/Wikipedia_it_Trame_Romanzi", "scribis/Corpus-Frasi-da-Opere-Letterarie", "scribis/Wikipedia-it-Trame-di-Film", "scribis/Wikipedia-it-Descrizioni-di-Dipinti"], "base_model": "scribis/Fantastica-7b-Instruct-0.2-Italian_merged", "quantized_by": "mradermacher"}
mradermacher/Fantastica-7b-Instruct-0.2-Italian_merged-GGUF
null
[ "transformers", "gguf", "Italian", "Mistral", "finetuning", "Text Generation", "en", "dataset:scribis/Wikipedia_it_Trame_Romanzi", "dataset:scribis/Corpus-Frasi-da-Opere-Letterarie", "dataset:scribis/Wikipedia-it-Trame-di-Film", "dataset:scribis/Wikipedia-it-Descrizioni-di-Dipinti", "base_model:scribis/Fantastica-7b-Instruct-0.2-Italian_merged", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T02:20:18+00:00
[]
[ "en" ]
TAGS #transformers #gguf #Italian #Mistral #finetuning #Text Generation #en #dataset-scribis/Wikipedia_it_Trame_Romanzi #dataset-scribis/Corpus-Frasi-da-Opere-Letterarie #dataset-scribis/Wikipedia-it-Trame-di-Film #dataset-scribis/Wikipedia-it-Descrizioni-di-Dipinti #base_model-scribis/Fantastica-7b-Instruct-0.2-Italian_merged #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #Italian #Mistral #finetuning #Text Generation #en #dataset-scribis/Wikipedia_it_Trame_Romanzi #dataset-scribis/Corpus-Frasi-da-Opere-Letterarie #dataset-scribis/Wikipedia-it-Trame-di-Film #dataset-scribis/Wikipedia-it-Descrizioni-di-Dipinti #base_model-scribis/Fantastica-7b-Instruct-0.2-Italian_merged #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
adapter-transformers
# Adapter `BigTMiami/D_adapter_seq_bn_pretraining_P_20` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_split_25M_reviews_20_percent_condensed](https://huggingface.co/datasets/BigTMiami/amazon_split_25M_reviews_20_percent_condensed/) dataset and includes a prediction head for masked lm. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/D_adapter_seq_bn_pretraining_P_20", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_split_25M_reviews_20_percent_condensed"]}
BigTMiami/D_adapter_seq_bn_pretraining_P_20
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_split_25M_reviews_20_percent_condensed", "region:us" ]
null
2024-04-13T02:20:54+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us
# Adapter 'BigTMiami/D_adapter_seq_bn_pretraining_P_20' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/D_adapter_seq_bn_pretraining_P_20' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us \n", "# Adapter 'BigTMiami/D_adapter_seq_bn_pretraining_P_20' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-7b-chat-Non-Toxic-143k This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2200 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4400 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.13.3
{"tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "llama-7b-chat-Non-Toxic-143k", "results": []}]}
Niyantha23M/llama-7b-chat-Non-Toxic-143k
null
[ "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-13T02:21:19+00:00
[]
[]
TAGS #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
# llama-7b-chat-Non-Toxic-143k This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2200 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4400 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.13.3
[ "# llama-7b-chat-Non-Toxic-143k\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2200\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4400\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.33.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.13.3" ]
[ "TAGS\n#trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n", "# llama-7b-chat-Non-Toxic-143k\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2200\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4400\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.33.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.13.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small En 3 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 3.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3916 - Wer: 361.7761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5756 | 1.34 | 1000 | 0.2520 | 187.0784 | | 0.3595 | 2.67 | 2000 | 0.2722 | 359.6159 | | 0.3034 | 4.01 | 3000 | 0.3342 | 304.8905 | | 0.1605 | 5.34 | 4000 | 0.3916 | 361.7761 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"language": ["hi"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small En 3", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 3.0", "type": "mozilla-foundation/common_voice_11_0", "config": "en", "split": "None", "args": "config: hi, split: test"}, "metrics": [{"type": "wer", "value": 361.776131866536, "name": "Wer"}]}]}]}
glenn2/whisper-small-b1
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-13T02:22:14+00:00
[]
[ "hi" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Small En 3 ================== This model is a fine-tuned version of openai/whisper-small on the Common Voice 3.0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3916 * Wer: 361.7761 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# c4ai-command-r-plus - EXL2 5.5bpw This is a 5.5bpw EXL2 quant of [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) Details about the model can be found at the above model page. ## Turbodep EXL2 Quants This repo only has specific quants not already done at [turboderp/command-r-plus-103B-exl2](https://huggingface.co/turboderp/command-r-plus-103B-exl2) Quants marked as turboderp can be downloaded from that repo. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | Repo | |-------------|------------------|------| | 6.0 | 4.7068 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 5.5 | 4.7136 | Dracones | | 5.0 | 4.7309 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.5 | 4.8111 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.25 | 4.8292 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.0 | 4.8603 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.75 | 4.9112 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.5 | 4.9592 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.25 | 5.0631 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.0 | 5.2050 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.75 | 5.3820 | Dracones | | 2.5 | 5.6681 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.25 | 5.9769 | Dracones | ## EQ Bench Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. | Quant Size | Alpaca | ChatML | Command-R | Command-R-Plus | |------------|--------|--------|--------|--------| | 6.0 | 70.77 | 62.58 | 75.81 | 74.95 | | 5.5 | 71.93 | 67.7 | 74.9 | 75.48 | | 5.0 | 69.51 | 63.94 | 74.92 | 75.28 | _Note:_ EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS_TOKEN into the starter prompt. _text-generation-webui/instruction-templates/Command-R-Plus.yaml_: ```yaml instruction_template: |- {%- if messages[0]['role'] == 'system' -%} {%- set loop_messages = messages[1:] -%} {%- set system_message = messages[0]['content'] -%} {%- elif false == true -%} {%- set loop_messages = messages -%} {%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%} {%- else -%} {%- set loop_messages = messages -%} {%- set system_message = false -%} {%- endif -%} {%- if system_message != false -%} {{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- for message in loop_messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- elif message['role'] == 'assistant' -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }} {%- endif -%} ``` ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" BIT_PRECISIONS=(8.0 7.5 7.0 6.5 5.5 2.75 2.25) # MODEL_NAME="turboderp_command-r-plus-103B" # BIT_PRECISIONS=(6.0 5.0 4.5 4.25 4.0 3.75 3.5 3.25 3.0 2.5) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # MODEL_DIR="models/${MODEL_NAME}-exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(5.0) BIT_PRECISIONS=(8.0 7.5 6.5 5.5 2.75 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["exl2"]}
Dracones/c4ai-command-r-plus_exl2_5.5bpw
null
[ "transformers", "safetensors", "cohere", "text-generation", "exl2", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T02:29:49+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
c4ai-command-r-plus - EXL2 5.5bpw ================================= This is a 5.5bpw EXL2 quant of CohereForAI/c4ai-command-r-plus Details about the model can be found at the above model page. Turbodep EXL2 Quants -------------------- This repo only has specific quants not already done at turboderp/command-r-plus-103B-exl2 Quants marked as turboderp can be downloaded from that repo. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. Quant Level: 6.0, Perplexity Score: 4.7068, Repo: turboderp Quant Level: 5.5, Perplexity Score: 4.7136, Repo: Dracones Quant Level: 5.0, Perplexity Score: 4.7309, Repo: turboderp Quant Level: 4.5, Perplexity Score: 4.8111, Repo: turboderp Quant Level: 4.25, Perplexity Score: 4.8292, Repo: turboderp Quant Level: 4.0, Perplexity Score: 4.8603, Repo: turboderp Quant Level: 3.75, Perplexity Score: 4.9112, Repo: turboderp Quant Level: 3.5, Perplexity Score: 4.9592, Repo: turboderp Quant Level: 3.25, Perplexity Score: 5.0631, Repo: turboderp Quant Level: 3.0, Perplexity Score: 5.2050, Repo: turboderp Quant Level: 2.75, Perplexity Score: 5.3820, Repo: Dracones Quant Level: 2.5, Perplexity Score: 5.6681, Repo: turboderp Quant Level: 2.25, Perplexity Score: 5.9769, Repo: Dracones EQ Bench -------- Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. *Note:* EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\_TOKEN into the starter prompt. *text-generation-webui/instruction-templates/Command-R-Plus.yaml*: ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
null
adapter-transformers
# Adapter `BigTMiami/E_adapter_seq_bn_inv_pretraining_P_20` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_split_25M_reviews_20_percent_condensed](https://huggingface.co/datasets/BigTMiami/amazon_split_25M_reviews_20_percent_condensed/) dataset and includes a prediction head for masked lm. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/E_adapter_seq_bn_inv_pretraining_P_20", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_split_25M_reviews_20_percent_condensed"]}
BigTMiami/E_adapter_seq_bn_inv_pretraining_P_20
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_split_25M_reviews_20_percent_condensed", "region:us" ]
null
2024-04-13T02:36:04+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us
# Adapter 'BigTMiami/E_adapter_seq_bn_inv_pretraining_P_20' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/E_adapter_seq_bn_inv_pretraining_P_20' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us \n", "# Adapter 'BigTMiami/E_adapter_seq_bn_inv_pretraining_P_20' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Nous-Hermes-Llama2-70b-GGUF/resolve/main/Nous-Hermes-Llama2-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": ["mit"], "library_name": "transformers", "tags": ["llama-2", "self-instruct", "distillation", "synthetic instruction"], "base_model": "NousResearch/Nous-Hermes-Llama2-70b", "quantized_by": "mradermacher"}
mradermacher/Nous-Hermes-Llama2-70b-GGUF
null
[ "transformers", "gguf", "llama-2", "self-instruct", "distillation", "synthetic instruction", "en", "base_model:NousResearch/Nous-Hermes-Llama2-70b", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-13T02:36:08+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama-2 #self-instruct #distillation #synthetic instruction #en #base_model-NousResearch/Nous-Hermes-Llama2-70b #license-mit #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #llama-2 #self-instruct #distillation #synthetic instruction #en #base_model-NousResearch/Nous-Hermes-Llama2-70b #license-mit #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for neoncortex/mini-mistral-openhermes-2.5-chatml-test A tiny Mistral model trained as an experiment on teknium/OpenHermes-2.5. ## Model Details A 63M parameter auto-regressive LM using Mistral architecture as a base. - Multi-query Attention instead of Grouped-query Attention. - Sliding window is disabled. - Modified ChatML instead of Mistral chat template - TL;DR I used '<|im_start|>human' instead of '<|im_start|>user' ### Model Description Just doing it to see what happens. It'll take about 40 to 45 hours to train on two Nvidia RTX 3060 12GB. It uses ChatML for the chat template, but I fucked up the template in the dataset, using '<|im_start|>human' instead of '<|im_start|>user'. ¯\_(ツ)_/¯ So, here's the bits: ``` {%- set ns = namespace(found=false) -%} {%- for message in messages -%} {%- if message['role'] == 'system' -%} {%- set ns.found = true -%} {%- endif -%} {%- endfor -%} {%- for message in messages %} {%- if message['role'] == 'system' -%} {{- '<|im_start|>system\n' + message['content'].rstrip() + '<|im_end|>\n' -}} {%- else -%} {%- if message['role'] == 'human' -%} {{-'<|im_start|>human\n' + message['content'].rstrip() + '<|im_end|>\n'-}} {%- else -%} {{-'<|im_start|>assistant\n' + message['content'] + '<|im_end|>\n' -}} {%- endif -%} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{-'<|im_start|>assistant\n'-}} {%- endif -%} ``` - **Developed by:** gronkomatic - **Funded by:** gronkomatic - **Shared by:** gronkomatic - **Model type:** Mistral - **Language(s) (NLP):** English, maybe others I dunno - **License:** OpenRAIL, IDGAF ### Model Sources Exclusively available right here on HuggingFace! - **Repository:** https://huggingface.co/neoncortex/mini-mistral-openhermes-2.5-chatml-test - **Paper:** LoL - **Demo:** Just download it in Oobabooga and use the modified chatML template above. Maybe I'll throw together a Space or something. ## Uses If you wanna have a laugh at how bad it is then go ahead, but I wouldn't expect much from it. ### Out-of-Scope Use This model won't work well for pretty much everything, probably. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing I took the OpenHermes 2.5 dataset and formatted it with ChatML. #### Training Hyperparameters - **Training regime:** bf16 mixed precision #### Speeds, Sizes, Times epochs: 9 steps: 140976 batches per device: 6 1.04it/s ## Evaluation I tried to run evals but the eval suite just laughed at me. ## Model Examination Don't be rude. ## Environmental Impact - **Hardware Type:** I already told you. Try and keep up. - **Hours used:** ~45 x 2 I guess. - **Cloud Provider:** gronkomatic - **Compute Region:** myob - **Carbon Emitted:** Yes, definitely ### Compute Infrastructure I trained it on my PC with no side on it because I like to watch the GPUs do their work. #### Hardware 2 x Nvidia RTX 3060 12GB #### Software The wonderful free stuff at HuggingFace (https://huggingface.co)[https://huggingface.co]: transformers, datasets, trl ## Model Card Authors gronkomatic, unless you're offended by something, in which case it was hacked by hackers. ## Model Card Contact If you want to send me insults come find me on Reddit I guess u/gronkomatic.
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["teknium/OpenHermes-2.5"], "pipeline_tag": "text-generation"}
neoncortex/mini-mistral-openhermes-2.5-chatml-test
null
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:teknium/OpenHermes-2.5", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-13T02:41:51+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #en #dataset-teknium/OpenHermes-2.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Model Card for neoncortex/mini-mistral-openhermes-2.5-chatml-test A tiny Mistral model trained as an experiment on teknium/OpenHermes-2.5. ## Model Details A 63M parameter auto-regressive LM using Mistral architecture as a base. - Multi-query Attention instead of Grouped-query Attention. - Sliding window is disabled. - Modified ChatML instead of Mistral chat template - TL;DR I used '<|im_start|>human' instead of '<|im_start|>user' ### Model Description Just doing it to see what happens. It'll take about 40 to 45 hours to train on two Nvidia RTX 3060 12GB. It uses ChatML for the chat template, but I fucked up the template in the dataset, using '<|im_start|>human' instead of '<|im_start|>user'. ¯\_(ツ)_/¯ So, here's the bits: - Developed by: gronkomatic - Funded by: gronkomatic - Shared by: gronkomatic - Model type: Mistral - Language(s) (NLP): English, maybe others I dunno - License: OpenRAIL, IDGAF ### Model Sources Exclusively available right here on HuggingFace! - Repository: URL - Paper: LoL - Demo: Just download it in Oobabooga and use the modified chatML template above. Maybe I'll throw together a Space or something. ## Uses If you wanna have a laugh at how bad it is then go ahead, but I wouldn't expect much from it. ### Out-of-Scope Use This model won't work well for pretty much everything, probably. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing I took the OpenHermes 2.5 dataset and formatted it with ChatML. #### Training Hyperparameters - Training regime: bf16 mixed precision #### Speeds, Sizes, Times epochs: 9 steps: 140976 batches per device: 6 1.04it/s ## Evaluation I tried to run evals but the eval suite just laughed at me. ## Model Examination Don't be rude. ## Environmental Impact - Hardware Type: I already told you. Try and keep up. - Hours used: ~45 x 2 I guess. - Cloud Provider: gronkomatic - Compute Region: myob - Carbon Emitted: Yes, definitely ### Compute Infrastructure I trained it on my PC with no side on it because I like to watch the GPUs do their work. #### Hardware 2 x Nvidia RTX 3060 12GB #### Software The wonderful free stuff at HuggingFace (URL)[URL]: transformers, datasets, trl ## Model Card Authors gronkomatic, unless you're offended by something, in which case it was hacked by hackers. ## Model Card Contact If you want to send me insults come find me on Reddit I guess u/gronkomatic.
[ "# Model Card for neoncortex/mini-mistral-openhermes-2.5-chatml-test\n\nA tiny Mistral model trained as an experiment on teknium/OpenHermes-2.5.", "## Model Details\n\nA 63M parameter auto-regressive LM using Mistral architecture as a base.\n- Multi-query Attention instead of Grouped-query Attention.\n- Sliding window is disabled.\n- Modified ChatML instead of Mistral chat template - TL;DR I used '<|im_start|>human' instead of '<|im_start|>user'", "### Model Description\n\nJust doing it to see what happens.\n\nIt'll take about 40 to 45 hours to train on two Nvidia RTX 3060 12GB.\n\nIt uses ChatML for the chat template, but I fucked up the template in the dataset,\nusing '<|im_start|>human' instead of '<|im_start|>user'. ¯\\_(ツ)_/¯\nSo, here's the bits:\n\n\n\n- Developed by: gronkomatic\n- Funded by: gronkomatic\n- Shared by: gronkomatic\n- Model type: Mistral\n- Language(s) (NLP): English, maybe others I dunno\n- License: OpenRAIL, IDGAF", "### Model Sources\n\nExclusively available right here on HuggingFace!\n\n- Repository: URL\n- Paper: LoL\n- Demo: Just download it in Oobabooga and use the modified chatML template above. Maybe I'll throw together a Space or something.", "## Uses\n\nIf you wanna have a laugh at how bad it is then go ahead, but I wouldn't expect much from it.", "### Out-of-Scope Use\n\nThis model won't work well for pretty much everything, probably.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing\n\nI took the OpenHermes 2.5 dataset and formatted it with ChatML.", "#### Training Hyperparameters\n\n- Training regime: bf16 mixed precision", "#### Speeds, Sizes, Times\n\nepochs: 9\nsteps: 140976\nbatches per device: 6\n1.04it/s", "## Evaluation\n\nI tried to run evals but the eval suite just laughed at me.", "## Model Examination\n\nDon't be rude.", "## Environmental Impact\n\n- Hardware Type: I already told you. Try and keep up.\n- Hours used: ~45 x 2 I guess.\n- Cloud Provider: gronkomatic\n- Compute Region: myob\n- Carbon Emitted: Yes, definitely", "### Compute Infrastructure\n\nI trained it on my PC with no side on it because I like to watch the GPUs do their work.", "#### Hardware\n\n2 x Nvidia RTX 3060 12GB", "#### Software\n\nThe wonderful free stuff at HuggingFace (URL)[URL]: transformers, datasets, trl", "## Model Card Authors\n\ngronkomatic, unless you're offended by something, in which case it was hacked by hackers.", "## Model Card Contact\n\nIf you want to send me insults come find me on Reddit I guess u/gronkomatic." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #en #dataset-teknium/OpenHermes-2.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Model Card for neoncortex/mini-mistral-openhermes-2.5-chatml-test\n\nA tiny Mistral model trained as an experiment on teknium/OpenHermes-2.5.", "## Model Details\n\nA 63M parameter auto-regressive LM using Mistral architecture as a base.\n- Multi-query Attention instead of Grouped-query Attention.\n- Sliding window is disabled.\n- Modified ChatML instead of Mistral chat template - TL;DR I used '<|im_start|>human' instead of '<|im_start|>user'", "### Model Description\n\nJust doing it to see what happens.\n\nIt'll take about 40 to 45 hours to train on two Nvidia RTX 3060 12GB.\n\nIt uses ChatML for the chat template, but I fucked up the template in the dataset,\nusing '<|im_start|>human' instead of '<|im_start|>user'. ¯\\_(ツ)_/¯\nSo, here's the bits:\n\n\n\n- Developed by: gronkomatic\n- Funded by: gronkomatic\n- Shared by: gronkomatic\n- Model type: Mistral\n- Language(s) (NLP): English, maybe others I dunno\n- License: OpenRAIL, IDGAF", "### Model Sources\n\nExclusively available right here on HuggingFace!\n\n- Repository: URL\n- Paper: LoL\n- Demo: Just download it in Oobabooga and use the modified chatML template above. Maybe I'll throw together a Space or something.", "## Uses\n\nIf you wanna have a laugh at how bad it is then go ahead, but I wouldn't expect much from it.", "### Out-of-Scope Use\n\nThis model won't work well for pretty much everything, probably.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing\n\nI took the OpenHermes 2.5 dataset and formatted it with ChatML.", "#### Training Hyperparameters\n\n- Training regime: bf16 mixed precision", "#### Speeds, Sizes, Times\n\nepochs: 9\nsteps: 140976\nbatches per device: 6\n1.04it/s", "## Evaluation\n\nI tried to run evals but the eval suite just laughed at me.", "## Model Examination\n\nDon't be rude.", "## Environmental Impact\n\n- Hardware Type: I already told you. Try and keep up.\n- Hours used: ~45 x 2 I guess.\n- Cloud Provider: gronkomatic\n- Compute Region: myob\n- Carbon Emitted: Yes, definitely", "### Compute Infrastructure\n\nI trained it on my PC with no side on it because I like to watch the GPUs do their work.", "#### Hardware\n\n2 x Nvidia RTX 3060 12GB", "#### Software\n\nThe wonderful free stuff at HuggingFace (URL)[URL]: transformers, datasets, trl", "## Model Card Authors\n\ngronkomatic, unless you're offended by something, in which case it was hacked by hackers.", "## Model Card Contact\n\nIf you want to send me insults come find me on Reddit I guess u/gronkomatic." ]
text-generation
transformers
# c4ai-command-r-plus - EXL2 2.75bpw This is a 2.75bpw EXL2 quant of [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) Details about the model can be found at the above model page. ## Turbodep EXL2 Quants This repo only has specific quants not already done at [turboderp/command-r-plus-103B-exl2](https://huggingface.co/turboderp/command-r-plus-103B-exl2) Quants marked as turboderp can be downloaded from that repo. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | Repo | |-------------|------------------|------| | 6.0 | 4.7068 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 5.5 | 4.7136 | Dracones | | 5.0 | 4.7309 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.5 | 4.8111 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.25 | 4.8292 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.0 | 4.8603 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.75 | 4.9112 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.5 | 4.9592 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.25 | 5.0631 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.0 | 5.2050 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.75 | 5.3820 | Dracones | | 2.5 | 5.6681 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.25 | 5.9769 | Dracones | ## EQ Bench Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. | Quant Size | Alpaca | ChatML | Command-R | Command-R-Plus | |------------|--------|--------|--------|--------| | 6.0 | 70.77 | 62.58 | 75.81 | 74.95 | | 5.5 | 71.93 | 67.7 | 74.9 | 75.48 | | 5.0 | 69.51 | 63.94 | 74.92 | 75.28 | _Note:_ EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS_TOKEN into the starter prompt. _text-generation-webui/instruction-templates/Command-R-Plus.yaml_: ```yaml instruction_template: |- {%- if messages[0]['role'] == 'system' -%} {%- set loop_messages = messages[1:] -%} {%- set system_message = messages[0]['content'] -%} {%- elif false == true -%} {%- set loop_messages = messages -%} {%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%} {%- else -%} {%- set loop_messages = messages -%} {%- set system_message = false -%} {%- endif -%} {%- if system_message != false -%} {{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- for message in loop_messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- elif message['role'] == 'assistant' -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }} {%- endif -%} ``` ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" BIT_PRECISIONS=(8.0 7.5 7.0 6.5 5.5 2.75 2.25) # MODEL_NAME="turboderp_command-r-plus-103B" # BIT_PRECISIONS=(6.0 5.0 4.5 4.25 4.0 3.75 3.5 3.25 3.0 2.5) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # MODEL_DIR="models/${MODEL_NAME}-exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(5.0) BIT_PRECISIONS=(8.0 7.5 6.5 5.5 2.75 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["exl2"]}
Dracones/c4ai-command-r-plus_exl2_2.75bpw
null
[ "transformers", "safetensors", "cohere", "text-generation", "exl2", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T02:42:57+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
c4ai-command-r-plus - EXL2 2.75bpw ================================== This is a 2.75bpw EXL2 quant of CohereForAI/c4ai-command-r-plus Details about the model can be found at the above model page. Turbodep EXL2 Quants -------------------- This repo only has specific quants not already done at turboderp/command-r-plus-103B-exl2 Quants marked as turboderp can be downloaded from that repo. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. Quant Level: 6.0, Perplexity Score: 4.7068, Repo: turboderp Quant Level: 5.5, Perplexity Score: 4.7136, Repo: Dracones Quant Level: 5.0, Perplexity Score: 4.7309, Repo: turboderp Quant Level: 4.5, Perplexity Score: 4.8111, Repo: turboderp Quant Level: 4.25, Perplexity Score: 4.8292, Repo: turboderp Quant Level: 4.0, Perplexity Score: 4.8603, Repo: turboderp Quant Level: 3.75, Perplexity Score: 4.9112, Repo: turboderp Quant Level: 3.5, Perplexity Score: 4.9592, Repo: turboderp Quant Level: 3.25, Perplexity Score: 5.0631, Repo: turboderp Quant Level: 3.0, Perplexity Score: 5.2050, Repo: turboderp Quant Level: 2.75, Perplexity Score: 5.3820, Repo: Dracones Quant Level: 2.5, Perplexity Score: 5.6681, Repo: turboderp Quant Level: 2.25, Perplexity Score: 5.9769, Repo: Dracones EQ Bench -------- Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. *Note:* EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\_TOKEN into the starter prompt. *text-generation-webui/instruction-templates/Command-R-Plus.yaml*: ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vecvanilla_ctc_zero_infinity_longertrain This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1396 - Wer: 0.2973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.4721 | 0.43 | 100 | 1.0565 | 0.4014 | | 1.2574 | 0.85 | 200 | 0.9707 | 0.3704 | | 1.1397 | 1.28 | 300 | 0.9644 | 0.3609 | | 1.0939 | 1.71 | 400 | 0.9610 | 0.3637 | | 1.0874 | 2.14 | 500 | 0.9508 | 0.3581 | | 1.0573 | 2.56 | 600 | 0.8865 | 0.3518 | | 1.0386 | 2.99 | 700 | 1.0304 | 0.3493 | | 0.9792 | 3.42 | 800 | 0.8235 | 0.3523 | | 0.9789 | 3.85 | 900 | 0.8404 | 0.3388 | | 0.9095 | 4.27 | 1000 | 1.0925 | 0.3588 | | 0.8947 | 4.7 | 1100 | 1.0126 | 0.3357 | | 0.8571 | 5.13 | 1200 | 1.1404 | 0.3550 | | 0.8276 | 5.56 | 1300 | 0.8135 | 0.3294 | | 0.8631 | 5.98 | 1400 | 0.8342 | 0.3279 | | 0.8134 | 6.41 | 1500 | 0.8524 | 0.3177 | | 0.8027 | 6.84 | 1600 | 0.8182 | 0.3207 | | 0.7556 | 7.26 | 1700 | 0.8445 | 0.3185 | | 0.737 | 7.69 | 1800 | 0.8919 | 0.3197 | | 0.7398 | 8.12 | 1900 | 0.8115 | 0.3167 | | 0.7069 | 8.55 | 2000 | 0.8346 | 0.3174 | | 0.7206 | 8.97 | 2100 | 0.9714 | 0.3147 | | 0.6946 | 9.4 | 2200 | 0.8138 | 0.3124 | | 0.6752 | 9.83 | 2300 | 0.8366 | 0.3086 | | 0.7256 | 10.26 | 2400 | 0.8482 | 0.3044 | | 0.7063 | 10.68 | 2500 | 0.8997 | 0.3041 | | 0.6399 | 11.11 | 2600 | 0.8614 | 0.3045 | | 0.6268 | 11.54 | 2700 | 0.8564 | 0.3018 | | 0.6665 | 11.97 | 2800 | 0.8531 | 0.3006 | | 0.622 | 12.39 | 2900 | 0.8759 | 0.3007 | | 0.6568 | 12.82 | 3000 | 1.3093 | 0.3023 | | 0.6296 | 13.25 | 3100 | 1.1312 | 0.3002 | | 0.6448 | 13.68 | 3200 | 1.1779 | 0.2994 | | 0.6188 | 14.1 | 3300 | 1.1203 | 0.2989 | | 0.6216 | 14.53 | 3400 | 1.1421 | 0.2978 | | 0.6238 | 14.96 | 3500 | 1.1396 | 0.2973 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-base-960h", "model-index": [{"name": "wav2vecvanilla_ctc_zero_infinity_longertrain", "results": []}]}
charris/wav2vecvanilla_ctc_zero_infinity_longertrain
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T02:47:22+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #endpoints_compatible #region-us
wav2vecvanilla\_ctc\_zero\_infinity\_longertrain ================================================ This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.1396 * Wer: 0.2973 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 15 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TheHappyDrone/Uoxudo_V2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Uoxudo_V2-GGUF/resolve/main/Uoxudo_V2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "TheHappyDrone/Uoxudo_V2", "quantized_by": "mradermacher"}
mradermacher/Uoxudo_V2-GGUF
null
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "en", "base_model:TheHappyDrone/Uoxudo_V2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T02:48:10+00:00
[]
[ "en" ]
TAGS #transformers #gguf #text-generation-inference #unsloth #mistral #trl #sft #en #base_model-TheHappyDrone/Uoxudo_V2 #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #sft #en #base_model-TheHappyDrone/Uoxudo_V2 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
adapter-transformers
# Adapter `BigTMiami/D_adapter_seq_bn_classification_C_30` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/D_adapter_seq_bn_classification_C_30", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]}
BigTMiami/D_adapter_seq_bn_classification_C_30
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_helpfulness", "region:us" ]
null
2024-04-13T02:50:01+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
# Adapter 'BigTMiami/D_adapter_seq_bn_classification_C_30' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/D_adapter_seq_bn_classification_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n", "# Adapter 'BigTMiami/D_adapter_seq_bn_classification_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
text-generation
transformers
# c4ai-command-r-plus - EXL2 2.25bpw This is a 2.25bpw EXL2 quant of [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) Details about the model can be found at the above model page. ## Turbodep EXL2 Quants This repo only has specific quants not already done at [turboderp/command-r-plus-103B-exl2](https://huggingface.co/turboderp/command-r-plus-103B-exl2) Quants marked as turboderp can be downloaded from that repo. ## EXL2 Version These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. ## Perplexity Scoring Below are the perplexity scores for the EXL2 models. A lower score is better. | Quant Level | Perplexity Score | Repo | |-------------|------------------|------| | 6.0 | 4.7068 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 5.5 | 4.7136 | Dracones | | 5.0 | 4.7309 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.5 | 4.8111 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.25 | 4.8292 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 4.0 | 4.8603 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.75 | 4.9112 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.5 | 4.9592 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.25 | 5.0631 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 3.0 | 5.2050 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.75 | 5.3820 | Dracones | | 2.5 | 5.6681 | [turboderp](https://huggingface.co/turboderp/command-r-plus-103B-exl2) | | 2.25 | 5.9769 | Dracones | ## EQ Bench Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. | Quant Size | Alpaca | ChatML | Command-R | Command-R-Plus | |------------|--------|--------|--------|--------| | 6.0 | 70.77 | 62.58 | 75.81 | 74.95 | | 5.5 | 71.93 | 67.7 | 74.9 | 75.48 | | 5.0 | 69.51 | 63.94 | 74.92 | 75.28 | _Note:_ EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS_TOKEN into the starter prompt. _text-generation-webui/instruction-templates/Command-R-Plus.yaml_: ```yaml instruction_template: |- {%- if messages[0]['role'] == 'system' -%} {%- set loop_messages = messages[1:] -%} {%- set system_message = messages[0]['content'] -%} {%- elif false == true -%} {%- set loop_messages = messages -%} {%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%} {%- else -%} {%- set loop_messages = messages -%} {%- set system_message = false -%} {%- endif -%} {%- if system_message != false -%} {{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- for message in loop_messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- elif message['role'] == 'assistant' -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }} {%- endif -%} ``` ### Perplexity Script This was the script used for perplexity testing. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" BIT_PRECISIONS=(8.0 7.5 7.0 6.5 5.5 2.75 2.25) # MODEL_NAME="turboderp_command-r-plus-103B" # BIT_PRECISIONS=(6.0 5.0 4.5 4.25 4.0 3.75 3.5 3.25 3.0 2.5) # Print the markdown table header echo "| Quant Level | Perplexity Score |" echo "|-------------|------------------|" for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # MODEL_DIR="models/${MODEL_NAME}-exl2_${BIT_PRECISION}bpw" if [ -d "$MODEL_DIR" ]; then output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet) score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+') echo "| $BIT_PRECISION | $score |" fi done ``` ## Quant Details This is the script used for quantization. ```bash #!/bin/bash # Activate the conda environment source ~/miniconda3/etc/profile.d/conda.sh conda activate exllamav2 # Set the model name and bit size MODEL_NAME="c4ai-command-r-plus" # Define variables MODEL_DIR="models/$MODEL_NAME" OUTPUT_DIR="exl2_$MODEL_NAME" MEASUREMENT_FILE="measurements/$MODEL_NAME.json" # Create the measurement file if needed if [ ! -f "$MEASUREMENT_FILE" ]; then echo "Creating $MEASUREMENT_FILE" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE fi # Choose one of the below. Either create a single quant for testing or a batch of them. # BIT_PRECISIONS=(5.0) BIT_PRECISIONS=(8.0 7.5 6.5 5.5 2.75 2.25) for BIT_PRECISION in "${BIT_PRECISIONS[@]}" do CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw" # If it doesn't already exist, make the quant if [ ! -d "$CONVERTED_FOLDER" ]; then echo "Creating $CONVERTED_FOLDER" # Create directories if [ -d "$OUTPUT_DIR" ]; then rm -r "$OUTPUT_DIR" fi mkdir "$OUTPUT_DIR" mkdir "$CONVERTED_FOLDER" # Run conversion commands python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER fi done ```
{"language": ["en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["exl2"]}
Dracones/c4ai-command-r-plus_exl2_2.25bpw
null
[ "transformers", "safetensors", "cohere", "text-generation", "exl2", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T02:50:01+00:00
[]
[ "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar" ]
TAGS #transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
c4ai-command-r-plus - EXL2 2.25bpw ================================== This is a 2.25bpw EXL2 quant of CohereForAI/c4ai-command-r-plus Details about the model can be found at the above model page. Turbodep EXL2 Quants -------------------- This repo only has specific quants not already done at turboderp/command-r-plus-103B-exl2 Quants marked as turboderp can be downloaded from that repo. EXL2 Version ------------ These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version. Perplexity Scoring ------------------ Below are the perplexity scores for the EXL2 models. A lower score is better. Quant Level: 6.0, Perplexity Score: 4.7068, Repo: turboderp Quant Level: 5.5, Perplexity Score: 4.7136, Repo: Dracones Quant Level: 5.0, Perplexity Score: 4.7309, Repo: turboderp Quant Level: 4.5, Perplexity Score: 4.8111, Repo: turboderp Quant Level: 4.25, Perplexity Score: 4.8292, Repo: turboderp Quant Level: 4.0, Perplexity Score: 4.8603, Repo: turboderp Quant Level: 3.75, Perplexity Score: 4.9112, Repo: turboderp Quant Level: 3.5, Perplexity Score: 4.9592, Repo: turboderp Quant Level: 3.25, Perplexity Score: 5.0631, Repo: turboderp Quant Level: 3.0, Perplexity Score: 5.2050, Repo: turboderp Quant Level: 2.75, Perplexity Score: 5.3820, Repo: Dracones Quant Level: 2.5, Perplexity Score: 5.6681, Repo: turboderp Quant Level: 2.25, Perplexity Score: 5.9769, Repo: Dracones EQ Bench -------- Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Command-R and Command-R-Plus prompt templates. A higher score is better. *Note:* EQ Bench scripting not working well, other quants may not be tested. ### Command-R-Plus Template This is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\_TOKEN into the starter prompt. *text-generation-webui/instruction-templates/Command-R-Plus.yaml*: ### Perplexity Script This was the script used for perplexity testing. Quant Details ------------- This is the script used for quantization.
[ "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #exl2 #en #fr #de #es #it #pt #ja #ko #zh #ar #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Command-R-Plus Template\n\n\nThis is the Command-R-Plus template yaml that was used in EQ bench(which uses Text Generation Web UI yaml templates). It adds BOS\\_TOKEN into the starter prompt.\n\n\n*text-generation-webui/instruction-templates/Command-R-Plus.yaml*:", "### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization." ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon_helpfulness_classification_roberta This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3306 - Accuracy: 0.8703 - F1 Macro: 0.6443 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3164 | 1.0 | 7204 | 0.3329 | 0.8724 | 0.6582 | | 0.2762 | 2.0 | 14408 | 0.3466 | 0.8744 | 0.6596 | | 0.2622 | 3.0 | 21612 | 0.3613 | 0.872 | 0.6710 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "amazon_helpfulness_classification_roberta", "results": []}]}
ltuzova/amazon_helpfulness_classification_roberta
null
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T02:54:31+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
amazon\_helpfulness\_classification\_roberta ============================================ This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3306 * Accuracy: 0.8703 * F1 Macro: 0.6443 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.06 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# EasyContext <p align="center"> <img src="https://github.com/jzhang38/EasyContext/raw/main/data/Logo.webp" width="500"> </p> <p align="center"> <a href="https://github.com/jzhang38/EasyContext" target="_blank">GitHub Repo</a> </p> Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. **This is a context-extrapolated base model.** It has not been instruct-finetuned. This model is finetuned from h2oai/h2o-danube2-1.8b-base with EasyContext on context length 256K. Note that I keep max_position_embeddings in config.json to 4096 because HF llama will create 2D causal mask during initialization. If it is set to 256K GPU will just OOM. You can surely use this model with context length longer than 4096. <p align="center"> <img src="./heatmap.png" width="800"> </p>
{}
PY007/EasyContext-256K-danube2-1.8b
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T02:57:50+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# EasyContext <p align="center"> <img src="URL width="500"> </p> <p align="center"> <a href="URL target="_blank">GitHub Repo</a> </p> Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. This is a context-extrapolated base model. It has not been instruct-finetuned. This model is finetuned from h2oai/h2o-danube2-1.8b-base with EasyContext on context length 256K. Note that I keep max_position_embeddings in URL to 4096 because HF llama will create 2D causal mask during initialization. If it is set to 256K GPU will just OOM. You can surely use this model with context length longer than 4096. <p align="center"> <img src="./URL" width="800"> </p>
[ "# EasyContext\n\n\n<p align=\"center\">\n <img src=\"URL width=\"500\">\n</p>\n\n<p align=\"center\">\n <a href=\"URL target=\"_blank\">GitHub Repo</a>\n</p>\n\nMemory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware.\n\nThis is a context-extrapolated base model. It has not been instruct-finetuned.\n\nThis model is finetuned from h2oai/h2o-danube2-1.8b-base with EasyContext on context length 256K. Note that I keep max_position_embeddings in URL to 4096 because HF llama will create 2D causal mask during initialization. If it is set to 256K GPU will just OOM. You can surely use this model with context length longer than 4096.\n\n<p align=\"center\">\n <img src=\"./URL\" width=\"800\">\n</p>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# EasyContext\n\n\n<p align=\"center\">\n <img src=\"URL width=\"500\">\n</p>\n\n<p align=\"center\">\n <a href=\"URL target=\"_blank\">GitHub Repo</a>\n</p>\n\nMemory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware.\n\nThis is a context-extrapolated base model. It has not been instruct-finetuned.\n\nThis model is finetuned from h2oai/h2o-danube2-1.8b-base with EasyContext on context length 256K. Note that I keep max_position_embeddings in URL to 4096 because HF llama will create 2D causal mask during initialization. If it is set to 256K GPU will just OOM. You can surely use this model with context length longer than 4096.\n\n<p align=\"center\">\n <img src=\"./URL\" width=\"800\">\n</p>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
domenicrosati/adversarial_loss_lr_1e-5_defence_steps_10000_model_meta-llama_Llama-2-7b-chat-hf_batch_4_epoch_4
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T02:57:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "252.67 +/- 15.69", "name": "mean_reward", "verified": false}]}]}]}
hafeezrai/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-13T03:01:45+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
adapter-transformers
# Adapter `BigTMiami/E_adapter_seq_bn_inv_classification_C_30` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/E_adapter_seq_bn_inv_classification_C_30", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_helpfulness"]}
BigTMiami/E_adapter_seq_bn_inv_classification_C_30
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_helpfulness", "region:us" ]
null
2024-04-13T03:06:31+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
# Adapter 'BigTMiami/E_adapter_seq_bn_inv_classification_C_30' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/E_adapter_seq_bn_inv_classification_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n", "# Adapter 'BigTMiami/E_adapter_seq_bn_inv_classification_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/maywell/PiVoT-SUS-RP <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/PiVoT-SUS-RP-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PiVoT-SUS-RP-GGUF/resolve/main/PiVoT-SUS-RP.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "maywell/PiVoT-SUS-RP", "quantized_by": "mradermacher"}
mradermacher/PiVoT-SUS-RP-GGUF
null
[ "transformers", "gguf", "en", "base_model:maywell/PiVoT-SUS-RP", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:07:42+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-maywell/PiVoT-SUS-RP #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-maywell/PiVoT-SUS-RP #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HamSpamBERT This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on [Spam-Ham](https://huggingface.co/datasets/SalehAhmad/Spam-Ham) dataset. It achieves the following results on the evaluation set: - Loss: 0.0072 - Accuracy: 0.9991 - Precision: 1.0 - Recall: 0.9933 - F1: 0.9966 ```python from transformers import pipeline, BertTokenizer, BertForSequenceClassification tokenizer = BertTokenizer.from_pretrained("udit-k/HamSpamBERT") model = BertForSequenceClassification.from_pretrained("udit-k/HamSpamBERT") classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) print(classifier("Call this number to win FREE IPL FINAL tickets!!!")) print(classifier("Call me when you reach home :)")) ``` ``` [{'label': 'LABEL_1', 'score': 0.9999189376831055}] [{'label': 'LABEL_0', 'score': 0.9999370574951172}] ``` ## Model description This model is a fine-tuned version of the [BERT](https://huggingface.co/bert-base-uncased) model on [Spam-Ham](https://huggingface.co/datasets/SalehAhmad/Spam-Ham) dataset to improve the performance of sentiment analysis on Spam Detection tasks. - LABEL_0 = Ham (Not spam) - LABEL_1 = Spam ## Intended uses & limitations This model can be used to detect spam texts. The primary limitation of this model is that it was trained on a corpus of about 4700 rows and evaluated on around 1200 rows. ## Training and evaluation data - Training corpus = 80% - Evaluation corpus = 20% ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 279 | 0.0492 | 0.9901 | 1.0 | 0.9262 | 0.9617 | | 0.0635 | 2.0 | 558 | 0.0117 | 0.9982 | 1.0 | 0.9866 | 0.9932 | | 0.0635 | 3.0 | 837 | 0.0120 | 0.9982 | 0.9933 | 0.9933 | 0.9933 | | 0.0138 | 4.0 | 1116 | 0.0072 | 0.9991 | 1.0 | 0.9933 | 0.9966 | | 0.0138 | 5.0 | 1395 | 0.0086 | 0.9982 | 0.9933 | 0.9933 | 0.9933 | | 0.0007 | 6.0 | 1674 | 0.0090 | 0.9982 | 0.9933 | 0.9933 | 0.9933 | | 0.0007 | 7.0 | 1953 | 0.0091 | 0.9982 | 0.9933 | 0.9933 | 0.9933 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.13.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "widget": [{"text": "Ok i am on the way to home bye", "example_title": "Ham"}, {"text": "PRIVATE! Your 2004 Account Statement for 07742676969 shows 786 unredeemed Bonus Points. To claim call 08719180248 Identifier Code: 45239 Expires", "example_title": "Spam"}], "model-index": [{"name": "HamSpamBERT", "results": []}]}
udit-k/HamSpamBERT
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:08:52+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
HamSpamBERT =========== This model is a fine-tuned version of bert-base-uncased on Spam-Ham dataset. It achieves the following results on the evaluation set: * Loss: 0.0072 * Accuracy: 0.9991 * Precision: 1.0 * Recall: 0.9933 * F1: 0.9966 Model description ----------------- This model is a fine-tuned version of the BERT model on Spam-Ham dataset to improve the performance of sentiment analysis on Spam Detection tasks. * LABEL\_0 = Ham (Not spam) * LABEL\_1 = Spam Intended uses & limitations --------------------------- This model can be used to detect spam texts. The primary limitation of this model is that it was trained on a corpus of about 4700 rows and evaluated on around 1200 rows. Training and evaluation data ---------------------------- * Training corpus = 80% * Evaluation corpus = 20% ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 7 ### Training results ### Framework versions * Transformers 4.30.0 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0909 - Precision: 0.6831 - Recall: 0.7942 - F1: 0.7344 - Accuracy: 0.9787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1074 | 1.0 | 679 | 0.0666 | 0.6112 | 0.7891 | 0.6889 | 0.9764 | | 0.0483 | 2.0 | 1358 | 0.0678 | 0.6894 | 0.8094 | 0.7446 | 0.9793 | | 0.0136 | 3.0 | 2037 | 0.0909 | 0.6831 | 0.7942 | 0.7344 | 0.9787 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/biogpt", "model-index": [{"name": "bert-finetuned-ner", "results": []}]}
Kevin201217/bert-finetuned-ner
null
[ "transformers", "tensorboard", "safetensors", "biogpt", "token-classification", "generated_from_trainer", "base_model:microsoft/biogpt", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:15:09+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #biogpt #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #region-us
bert-finetuned-ner ================== This model is a fine-tuned version of microsoft/biogpt on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0909 * Precision: 0.6831 * Recall: 0.7942 * F1: 0.7344 * Accuracy: 0.9787 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #biogpt #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codebert_5 This model is a fine-tuned version of [microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.782 | 1.0 | 524 | 0.6491 | | 0.7042 | 2.0 | 1048 | 0.6141 | | 0.6636 | 3.0 | 1572 | 0.5842 | | 0.6579 | 4.0 | 2096 | 0.5818 | | 0.6318 | 5.0 | 2620 | 0.5732 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "microsoft/codebert-base-mlm", "model-index": [{"name": "codebert_5", "results": []}]}
ZZZZCCCC/codebert_5
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:microsoft/codebert-base-mlm", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:15:13+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-microsoft/codebert-base-mlm #autotrain_compatible #endpoints_compatible #region-us
codebert\_5 =========== This model is a fine-tuned version of microsoft/codebert-base-mlm on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.5732 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 12 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-microsoft/codebert-base-mlm #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Tokenizers 0.15.2" ]
null
adapter-transformers
# Adapter `BigTMiami/D_adapter_seq_bn_classification_P_20_to_C_30` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/D_adapter_seq_bn_classification_P_20_to_C_30", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]}
BigTMiami/D_adapter_seq_bn_classification_P_20_to_C_30
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_helpfulness", "region:us" ]
null
2024-04-13T03:18:44+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
# Adapter 'BigTMiami/D_adapter_seq_bn_classification_P_20_to_C_30' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/D_adapter_seq_bn_classification_P_20_to_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n", "# Adapter 'BigTMiami/D_adapter_seq_bn_classification_P_20_to_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
object-detection
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vrglx333/detr-finetuned-personal-equipment
null
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:23:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
sidmanale643/gemmaenglishtomarathi
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T03:23:37+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cackerman/rewrites_mistral7unsloth_4bit_ft_full_secondft
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:33:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-pixelcopter_V2", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "32.30 +/- 27.85", "name": "mean_reward", "verified": false}]}]}]}
pdx97/Reinforce-pixelcopter_V2
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-13T03:35:18+00:00
[]
[]
TAGS #Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing Pixelcopter-PLE-v0 This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
adapter-transformers
# Adapter `BigTMiami/E_adapter_seq_bn_inv_classification_P_20_to_C_30` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/E_adapter_seq_bn_inv_classification_P_20_to_C_30", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_helpfulness"]}
BigTMiami/E_adapter_seq_bn_inv_classification_P_20_to_C_30
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_helpfulness", "region:us" ]
null
2024-04-13T03:36:34+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
# Adapter 'BigTMiami/E_adapter_seq_bn_inv_classification_P_20_to_C_30' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/E_adapter_seq_bn_inv_classification_P_20_to_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n", "# Adapter 'BigTMiami/E_adapter_seq_bn_inv_classification_P_20_to_C_30' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"}
mille055/duke_chatbot0413_adapter
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-04-13T03:41:07+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
{"library_name": "peft", "base_model": "beomi/polyglot-ko-12.8b-safetensors"}
kasiwoos/polyglot-ko-12.8b-3epochs
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2024-04-13T03:42:13+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-beomi/polyglot-ko-12.8b-safetensors #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.1.dev0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-beomi/polyglot-ko-12.8b-safetensors #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Rimyy/Mistral7BFineTuningv1Math
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:43:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** Jacque008 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-2-13b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-2-13b-bnb-4bit"}
Jacque008/unsloth-llama2-13b-bnb-4bit_10246_ori_refer_fwd_epoch5
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-13b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:44:41+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: Jacque008 - License: apache-2.0 - Finetuned from model : unsloth/llama-2-13b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Jacque008\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Jacque008\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
Jacque008/unsloth-llama2-13b-bnb-4bit_10246_ori_refer_fwd_epoch5_tokenizer
null
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:44:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** Jacque008 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-2-13b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-2-13b-bnb-4bit"}
Jacque008/unsloth-llama2-13b-bnb-4bit_10246_ori_refer_fwd_epoch5_merge
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-2-13b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:45:00+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: Jacque008 - License: apache-2.0 - Finetuned from model : unsloth/llama-2-13b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Jacque008\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Jacque008\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Gordon119/TAT_TD-openai-whisper-large-v2-mix-with-zh-TAT-epoch2-total5epoch
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:45:34+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Aswin01/wav2vec2-large-mms-1b-turkish-colab
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:46:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sparse_mistral_7b_refined_web_50p_2024-04-12 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 4 - seed: 0 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 350 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3391 | 0.01 | 25 | 2.4196 | | 2.2711 | 0.02 | 50 | 2.3577 | | 2.3054 | 0.02 | 75 | 2.3158 | | 2.2795 | 0.03 | 100 | 2.2966 | | 2.3175 | 0.04 | 125 | 2.2846 | | 2.2388 | 0.05 | 150 | 2.2766 | | 2.1679 | 0.06 | 175 | 2.2705 | | 2.2996 | 0.06 | 200 | 2.2678 | | 2.2788 | 0.07 | 225 | 2.2647 | | 2.2448 | 0.08 | 250 | 2.2637 | | 2.1813 | 0.09 | 275 | 2.2619 | | 2.2059 | 0.1 | 300 | 2.2602 | | 2.2689 | 0.1 | 325 | 2.2582 | | 2.2238 | 0.11 | 350 | 2.2579 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "sparse_mistral_7b_refined_web_50p_2024-04-12", "results": []}]}
thrunlab/sparse_mistral_7b_refined_web_50p_2024-04-12
null
[ "transformers", "safetensors", "sparse_llama", "text-generation", "generated_from_trainer", "custom_code", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2024-04-13T03:46:06+00:00
[]
[]
TAGS #transformers #safetensors #sparse_llama #text-generation #generated_from_trainer #custom_code #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #region-us
sparse\_mistral\_7b\_refined\_web\_50p\_2024-04-12 ================================================== This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.2135 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 1 * eval\_batch\_size: 4 * seed: 0 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 350 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.15.0 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 0\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 350", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #safetensors #sparse_llama #text-generation #generated_from_trainer #custom_code #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 0\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 350", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: dallonf/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
dallonf/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-13T03:53:36+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how works ML-Agents: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: dallonf/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: dallonf/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: dallonf/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-SWAHILI-ASR-CV-14-1B This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_14_0 dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 0.2794 - Cer: 0.0903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:------:|:---------------:|:------:| | 1.9691 | 0.33 | 400 | 0.2374 | inf | 0.6776 | | 0.5464 | 0.66 | 800 | 0.1758 | inf | 0.5598 | | 0.4909 | 1.0 | 1200 | 0.1680 | inf | 0.5243 | | 0.4263 | 1.33 | 1600 | 0.1502 | inf | 0.4706 | | 0.4047 | 1.66 | 2000 | 0.1580 | inf | 0.4858 | | 0.4054 | 1.99 | 2400 | 0.1426 | inf | 0.4348 | | 0.3542 | 2.32 | 2800 | 0.1340 | inf | 0.4185 | | 0.3525 | 2.66 | 3200 | 0.1400 | inf | 0.4311 | | 0.3359 | 2.99 | 3600 | 0.1308 | inf | 0.4012 | | 0.3006 | 3.32 | 4000 | 0.1278 | inf | 0.3939 | | 0.326 | 1.83 | 4400 | inf | 0.4232 | 0.1362 | | 0.326 | 1.99 | 4800 | inf | 0.4136 | 0.1350 | | 0.3034 | 2.16 | 5200 | inf | 0.4282 | 0.1419 | | 0.2925 | 2.32 | 5600 | inf | 0.3901 | 0.1282 | | 0.2822 | 2.49 | 6000 | inf | 0.3876 | 0.1270 | | 0.2659 | 2.66 | 6400 | inf | 0.3586 | 0.1159 | | 0.2582 | 2.82 | 6800 | inf | 0.3536 | 0.1168 | | 0.2414 | 2.99 | 7200 | inf | 0.3327 | 0.1069 | | 0.208 | 3.15 | 7600 | inf | 0.3249 | 0.1053 | | 0.1934 | 3.32 | 8000 | inf | 0.3120 | 0.1015 | | 0.1881 | 3.49 | 8400 | inf | 0.3058 | 0.0993 | | 0.1774 | 3.65 | 8800 | inf | 0.2962 | 0.0959 | | 0.1736 | 3.82 | 9200 | inf | 0.2902 | 0.0935 | | 0.1679 | 3.98 | 9600 | inf | 0.2843 | 0.0917 | | 0.1436 | 4.15 | 10000 | inf | 0.2794 | 0.0903 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1 - Datasets 2.17.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_14_0"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-xls-r-1b", "model-index": [{"name": "XLS-R-SWAHILI-ASR-CV-14-1B", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_14_0", "type": "common_voice_14_0", "config": "sw", "split": "test", "args": "sw"}, "metrics": [{"type": "wer", "value": 0.2794303764906829, "name": "Wer"}]}]}]}
dmusingu/XLS-R-SWAHILI-ASR-CV-14-1B
null
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_14_0", "base_model:facebook/wav2vec2-xls-r-1b", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-13T03:57:30+00:00
[]
[]
TAGS #transformers #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_14_0 #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #model-index #endpoints_compatible #region-us
XLS-R-SWAHILI-ASR-CV-14-1B ========================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the common\_voice\_14\_0 dataset. It achieves the following results on the evaluation set: * Loss: inf * Wer: 0.2794 * Cer: 0.0903 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 10000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.1 * Pytorch 2.2.1 * Datasets 2.17.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 10000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.1\n* Datasets 2.17.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_14_0 #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 10000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.1\n* Datasets 2.17.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) * [arcee-ai/sec-mistral-7b-instruct-1.6-epoch](https://huggingface.co/arcee-ai/sec-mistral-7b-instruct-1.6-epoch) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: arcee-ai/sec-mistral-7b-instruct-1.6-epoch layer_range: [0, 32] - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 layer_range: [0, 32] merge_method: slerp base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cognitivecomputations/dolphin-2.8-mistral-7b-v02", "arcee-ai/sec-mistral-7b-instruct-1.6-epoch"]}
mergekit-community/mergekit-slerp-kxeioog
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-13T03:57:58+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #base_model-arcee-ai/sec-mistral-7b-instruct-1.6-epoch #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * cognitivecomputations/dolphin-2.8-mistral-7b-v02 * arcee-ai/sec-mistral-7b-instruct-1.6-epoch ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02\n* arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #base_model-arcee-ai/sec-mistral-7b-instruct-1.6-epoch #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02\n* arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]