Dataset schema (one row per Hugging Face Hub model):

| column          | dtype         | lengths / values |
|-----------------|---------------|------------------|
| pipeline_tag    | stringclasses | 48 values        |
| library_name    | stringclasses | 198 values       |
| text            | stringlengths | 1–900k           |
| metadata        | stringlengths | 2–438k           |
| id              | stringlengths | 5–122            |
| last_modified   | null          | n/a              |
| tags            | listlengths   | 1–1.84k          |
| sha             | null          | n/a              |
| created_at      | stringlengths | 25–25            |
| arxiv           | listlengths   | 0–201            |
| languages       | listlengths   | 0–1.83k          |
| tags_str        | stringlengths | 17–9.34k         |
| text_str        | stringlengths | 0–389k           |
| text_lists      | listlengths   | 0–722            |
| processed_texts | listlengths   | 1–723            |
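The rows below follow this schema. As a minimal sketch of how such a dump could be inspected with the 🤗 `datasets` library — the dataset repo id here is a hypothetical placeholder, since the source does not name one:

```python
from datasets import load_dataset

# "user/model-cards-dump" is a hypothetical placeholder repo id.
ds = load_dataset("user/model-cards-dump", split="train")

print(ds.column_names)          # the fifteen columns listed above
row = ds[0]
print(row["id"], row["pipeline_tag"], row["created_at"])
print(row["text"][:200])        # start of the raw model card markdown
```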
---
pipeline_tag: text-generation
library_name: transformers
text: (the standard auto-generated 🤗 model card template, reflowed below; every row in this sample carries the same template)

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

metadata: {"library_name": "transformers", "tags": []}
id: tom-brady/sn6_201
last_modified: null
tags: ["transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"]
sha: null
created_at: 2024-04-12T00:50:17+00:00
arxiv: ["1910.09700"]
languages: []
tags_str: TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
text_str: (the same template as `text`, with HTML comments, bold markup, the 🤗 emoji, and the "[More Information Needed]" placeholders stripped)
text_lists: (the `text_str` content split into one string per markdown heading)
processed_texts: (the `tags_str` entry followed by the entries of `text_lists`)
---
pipeline_tag: text-generation
library_name: transformers
text: (identical auto-generated model card template to the first row)
{"library_name": "transformers", "tags": []}
Zlovoblachko/Transliteration_L1_sent_generator
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T00:52:45+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
text_str / text_lists / processed_texts: (derived from `text` and `tags` exactly as in the first row)
---
pipeline_tag: token-classification
library_name: transformers
text: (identical auto-generated model card template to the first row)
{"library_name": "transformers", "tags": []}
Van1llaT/SkyWalker
null
[ "transformers", "safetensors", "biogpt", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T00:58:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #biogpt #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
text_str / text_lists / processed_texts: (derived from `text` and `tags` exactly as in the first row)
---
pipeline_tag: null
library_name: transformers
text: (identical auto-generated model card template to the first row)
{"library_name": "transformers", "tags": []}
abhayesian/BobzillaV9
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T00:59:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
text_str / text_lists / processed_texts: (derived from `text` and `tags` exactly as in the first row)
---
pipeline_tag: text-generation
library_name: transformers
text: (identical auto-generated model card template to the first row)
{"library_name": "transformers", "tags": []}
yuhuixu/Mistral-7B-Instruct-v0.2-ultrafeedback-v0.2
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T01:03:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
text_str / text_lists / processed_texts: (derived from `text` and `tags` exactly as in the first row)
---
pipeline_tag: text-generation
library_name: transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
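The card's quick-start section is an unfilled placeholder, so the following is only a minimal, untested sketch of standard transformers usage. The repo id comes from this record's id and tags (stablelm, text-generation, conversational); the prompt and generation settings are illustrative assumptions, not anything the card specifies.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this record; prompt and settings are illustrative.
model_id = "tom-brady/sn6_200"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The tags mark the model as conversational, so a short chat-style
# prompt is a reasonable smoke test.
inputs = tokenizer("Hello! How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```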
{"library_name": "transformers", "tags": []}
tom-brady/sn6_200
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:04:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
adapter-transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description Converted using: ct2-transformers-converter --model whisper-large-v2-anime-merged --output_dir whisper-large-v2-anime-merged-ct2 --copy_files tokenizer.json --quantization float16 - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] https://huggingface.co/sin2piusc/whisper-large-v2-anime clu-ling/whisper-large-v2-japanese-5k-steps joujiboi/japanese-anime-speech - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses ### Direct Use Drop it into a subtitle edit whisper model folder and use as if you were using that model. Works better with higher pitched voices. [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
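The card documents the CTranslate2 conversion command but leaves the quick-start empty. Below is a minimal sketch of transcribing with faster-whisper, which loads CTranslate2 Whisper checkpoints; the repo id comes from this record, compute_type mirrors the float16 quantization in the conversion command, and the audio file name is a placeholder assumption.

```python
from faster_whisper import WhisperModel

# Repo id from this record; "clip.wav" is a placeholder audio file.
# compute_type matches the float16 quantization used in the conversion.
model = WhisperModel("sin2piusc/whisper-large-v2-anime-ct2", compute_type="float16")

# Language pinned to Japanese per the card's datasets and language metadata.
segments, info = model.transcribe("clip.wav", language="ja")
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```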
{"language": ["ja"], "license": "apache-2.0", "library_name": "adapter-transformers", "tags": ["ctranslate2", "faster-whisper ", "anime", "japanese", "whisper"], "datasets": ["joujiboi/japanese-anime-speech", "mozilla-foundation/common_voice_11_0"], "pipeline_tag": "automatic-speech-recognition"}
sin2piusc/whisper-large-v2-anime-ct2
null
[ "adapter-transformers", "ctranslate2", "faster-whisper ", "anime", "japanese", "whisper", "automatic-speech-recognition", "ja", "dataset:joujiboi/japanese-anime-speech", "dataset:mozilla-foundation/common_voice_11_0", "arxiv:1910.09700", "license:apache-2.0", "region:us" ]
null
2024-04-12T01:06:25+00:00
[ "1910.09700" ]
[ "ja" ]
TAGS #adapter-transformers #ctranslate2 #faster-whisper #anime #japanese #whisper #automatic-speech-recognition #ja #dataset-joujiboi/japanese-anime-speech #dataset-mozilla-foundation/common_voice_11_0 #arxiv-1910.09700 #license-apache-2.0 #region-us
# Model Card for Model ID This modelcard aims to be a base template for new models. It has been generated using this raw template. ## Model Details ### Model Description Converted using: ct2-transformers-converter --model whisper-large-v2-anime-merged --output_dir whisper-large-v2-anime-merged-ct2 --copy_files URL --quantization float16 - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] URL clu-ling/whisper-large-v2-japanese-5k-steps joujiboi/japanese-anime-speech - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use Drop it into a subtitle edit whisper model folder and use as if you were using that model. Works better with higher pitched voices. ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\nConverted using:\n\nct2-transformers-converter --model whisper-large-v2-anime-merged --output_dir whisper-large-v2-anime-merged-ct2 --copy_files URL --quantization float16\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\nURL\n\nclu-ling/whisper-large-v2-japanese-5k-steps\njoujiboi/japanese-anime-speech\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use\n\nDrop it into a subtitle edit whisper model folder and use as if you were using that model. Works better with higher pitched voices.", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#adapter-transformers #ctranslate2 #faster-whisper #anime #japanese #whisper #automatic-speech-recognition #ja #dataset-joujiboi/japanese-anime-speech #dataset-mozilla-foundation/common_voice_11_0 #arxiv-1910.09700 #license-apache-2.0 #region-us \n", "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\nConverted using:\n\nct2-transformers-converter --model whisper-large-v2-anime-merged --output_dir whisper-large-v2-anime-merged-ct2 --copy_files URL --quantization float16\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\nURL\n\nclu-ling/whisper-large-v2-japanese-5k-steps\njoujiboi/japanese-anime-speech\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use\n\nDrop it into a subtitle edit whisper model folder and use as if you were using that model. Works better with higher pitched voices.", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: NousResearch/Hermes-2-Pro-Mistral-7B - model: WizardLM/WizardMath-7B-V1.1 merge_method: slerp base_model: NousResearch/Hermes-2-Pro-Mistral-7B dtype: bfloat16 parameters: t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers ```
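For readers who want to reproduce or use this merge: the YAML above can be fed to mergekit's mergekit-yaml command, and the published checkpoint loads like any transformers causal LM. A minimal sketch follows, assuming the repo id from this record; the local config and output paths are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# To reproduce the merge locally (paths are placeholders):
#   mergekit-yaml config.yaml ./merged-model
# To use the published result, load it as a normal causal LM;
# bfloat16 matches the dtype declared in the merge config above.
model_id = "mergekit-community/mergekit-slerp-qamquir"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```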
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["WizardLM/WizardMath-7B-V1.1", "NousResearch/Hermes-2-Pro-Mistral-7B"]}
mergekit-community/mergekit-slerp-qamquir
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:WizardLM/WizardMath-7B-V1.1", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T01:09:37+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-WizardLM/WizardMath-7B-V1.1 #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * WizardLM/WizardMath-7B-V1.1 * NousResearch/Hermes-2-Pro-Mistral-7B ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* WizardLM/WizardMath-7B-V1.1\n* NousResearch/Hermes-2-Pro-Mistral-7B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-WizardLM/WizardMath-7B-V1.1 #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* WizardLM/WizardMath-7B-V1.1\n* NousResearch/Hermes-2-Pro-Mistral-7B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
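The card body is the unfilled template, but the record's metadata pins the base model (google/flan-t5-base) and PEFT 0.10.0, so a plausible adapter-loading sketch can be grounded in it. The adapter repo id is this record's id; the sample prompt is an assumption based only on the "dialogsum" name.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Base model from the record metadata; adapter repo id from this record.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "PraphulSamavedam/flan-t5-base-dialogsum-lora-rank-8")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# The repo name suggests dialogue summarization, so a summarization
# prompt is a plausible (unverified) smoke test.
inputs = tokenizer("Summarize: A: Hi! B: Are we still on for lunch?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```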
{"library_name": "peft", "base_model": "google/flan-t5-base"}
PraphulSamavedam/flan-t5-base-dialogsum-lora-rank-8
null
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:google/flan-t5-base", "region:us" ]
null
2024-04-12T01:10:10+00:00
[ "1910.09700" ]
[]
TAGS #peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-google/flan-t5-base #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-google/flan-t5-base #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-chat-hf_fictional_v1 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.16.0 - Tokenizers 0.15.0
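The listed hyperparameters map directly onto transformers TrainingArguments plus a trl SFTTrainer (the card's trl/sft tags); a minimal sketch under the framework versions above follows. The "generator" dataset is not linked from the card, so the data file and text field below are placeholders, and Adam's defaults already give betas=(0.9, 0.999) and epsilon=1e-08.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: the "generator" dataset is not linked from the card.
train_dataset = load_dataset("json", data_files="generator.jsonl", split="train")

# Hyperparameters mirror the card; 1 x 4 accumulation = total batch size 4.
args = TrainingArguments(
    output_dir="Llama-2-7b-chat-hf_fictional_v1",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    seed=42,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    dataset_text_field="text",  # placeholder field name
)
trainer.train()
```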
{"tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "Llama-2-7b-chat-hf_fictional_v1", "results": []}]}
yzhuang/Llama-2-7b-chat-hf_fictional_v1
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:meta-llama/Llama-2-7b-chat-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T01:15:44+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Llama-2-7b-chat-hf_fictional_v1 This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.16.0 - Tokenizers 0.15.0
[ "# Llama-2-7b-chat-hf_fictional_v1\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2\n- Datasets 2.16.0\n- Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Llama-2-7b-chat-hf_fictional_v1\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2\n- Datasets 2.16.0\n- Tokenizers 0.15.0" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
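Although the card body is the unfilled template, the record's metadata carries working widget examples in Tagalog and English, so a minimal inference sketch can be grounded in them; the repo id and sample texts come directly from this record.

```python
from transformers import pipeline

# Repo id and example inputs taken from this record's widget metadata.
classifier = pipeline("text-classification", model="MENG21/stud-fac-eval-bert-large-uncased_v2")

for text in [
    "laging pumapasok si sir!",              # Tagalog widget example
    "I really don't understand the lesson",  # English widget example
]:
    print(text, "->", classifier(text))
```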
{"language": ["tl", "en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["MENG21/studfacultyeval_dataset"], "metrics": ["accuracy", "f1", "recall", "precision"], "pipeline_tag": "text-classification", "widget": [{"text": "laging pumapasok si sir!", "example_title": "Tagalog"}, {"text": "I really don't understand the lesson", "example_title": "English"}, {"text": "I love the way na magturo si sir, I always understand the lesson lagi", "example_title": "Tag-lish"}]}
MENG21/stud-fac-eval-bert-large-uncased_v2
null
[ "transformers", "safetensors", "bert", "text-classification", "tl", "en", "dataset:MENG21/studfacultyeval_dataset", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-12T01:15:51+00:00
[ "1910.09700" ]
[ "tl", "en" ]
TAGS #transformers #safetensors #bert #text-classification #tl #en #dataset-MENG21/studfacultyeval_dataset #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #tl #en #dataset-MENG21/studfacultyeval_dataset #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Uploaded model - **Developed by:** chatty123 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
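Since this card names its Unsloth 4-bit base checkpoint, a minimal reload sketch follows; the repo id is this record's id, and max_seq_length is an assumption the card does not state.

```python
from unsloth import FastLanguageModel

# Repo id from this record; max_seq_length is an assumption. Unsloth
# resolves LoRA-adapter repos onto their base checkpoint when detected.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chatty123/mistral_rank8_packing",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```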
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
chatty123/mistral_rank8_packing
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:21:24+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: chatty123 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HeydarS/mistral_witQA_peft_v44
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:24:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Fireworks Mixtral 8x22b Instruct OH Fireworks Mixtral 8x22b Instruct OH is an instruct version of the latest MoE model from [mistralai](https://huggingface.co/mistralai) - [8x22b](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1). This model was finetuned on ~10K entries from the [OpenHermes](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset by [NousResearch](https://huggingface.co/NousResearch). Huge shoutout to [Teknium](https://huggingface.co/teknium) and the NousResearch team for this high-quality SFT dataset. ## How to Get Started with the Model Visit our [Playground](https://fireworks.ai/models/fireworks/mixtral-8x22b-instruct-preview) to try the model on a serverless platform, or access it via [API](https://readme.fireworks.ai/reference/createchatcompletion). Fine-tune it using our [guide](https://readme.fireworks.ai/docs/fine-tuning-models). The model uses the Vicuna chat template: ``` USER: ... ASSISTANT: ... ``` or: ``` SYSTEM: ... USER: ... ASSISTANT: ... ``` ### Training Data To save training time, we sample & format 10K samples from the [OpenHermes 2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset. We reformat the samples to fit the Vicuna chat template; apart from that, the dataset is used as is. #### Training Setup The model was trained on 8xA100 [80G] using DeepSpeed ZeRO-3. We had to fix an existing bug in DeepSpeed ZeRO-3 where the interaction between ZeRO-3 and MoE caused training to get stuck. ## Model Examination <details> <summary>Sample 1: Itinerary Question</summary> <code> SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions. USER: Create a travel itinerary for the bahamas. Spare no detail. ASSISTANT: Sure, I'd be happy to help you plan a trip to the Bahamas! Here's a detailed itinerary for your consideration: Day 1: Arrival in Nassau - Arrive at Lynden Pindling International Airport in Nassau, the capital city of the Bahamas. - Check into your hotel or resort. - Spend the afternoon exploring the city, visiting places like the Queen's Staircase, Fort Fincastle, and the Straw Market. - Enjoy a delicious Bahamian dinner at one of the local restaurants. Day 2: Nassau City Tour and Atlantis Resort - Start your day with a guided city tour of Nassau, visiting historical sites like the Government House, Parliament Square, and the Pirates of Nassau Museum. - In the afternoon, head to the famous Atlantis Resort on Paradise Island. Explore the resort's attractions, such as the Aquaventure Water Park, the Marine Habitat, and the casino. - End the day with a relaxing dinner at one of the resort's restaurants. Day 3: Exuma Cays Day Trip - Take a day trip to the Exuma Cays, a group of over 365 islands and cays. - Visit popular spots like the Swimming Pigs at Big Major Cay, the Thunderball Grotto (featured in the James Bond movie "Thunderball"), and the stunning sandbars. - Enjoy a picnic lunch on one of the pristine beaches. - Return to Nassau in the evening. Day 4: Blue Lagoon Island and Beach Day - Spend the morning at Blue Lagoon Island, a private island just a short boat ride from Nassau. - Enjoy activities like swimming, snorkeling, kayaking, or simply relaxing on the beach. - In the afternoon, head back to Nassau and spend some time at one of the city's beautiful beaches, such as Cable Beach or Junkanoo Beach. - Have dinner at a local seafood restaurant. 
Day 5: Departure - Depending on your flight schedule, you may have some time to explore Nassau further or do some last-minute shopping before heading to the airport for your departure. I hope this itinerary helps you plan your trip to the Bahamas! Let me know if you have any other questions or need further assistance. </code> </details> <details> <summary>Sample 2: QnA Question</summary> <code> SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions. USER: What is the medical term for strong headache on one side of the head? ASSISTANT: The medical term for a strong headache on one side of the head is called a migraine. </code> </details> <details> <summary>Sample 3: Knowledge Question</summary> <code> SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions. USER: How can I get stock price of stripe? ASSISTANT: Stripe is a private company and does not have publicly traded stock. Therefore, you cannot get the stock price for Stripe. </code> </details>
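The card shows the Vicuna template but no formatting code. Below is a minimal, hypothetical sketch of rendering chat messages into that SYSTEM/USER/ASSISTANT layout; the helper name and the message schema are assumptions for illustration, not part of the card.

```python
# Hypothetical helper (not from the card): renders chat messages into the
# Vicuna-style prompt format the card describes.
def format_vicuna(messages, system=None):
    lines = [f"SYSTEM: {system}"] if system else []
    for m in messages:
        role = "USER" if m["role"] == "user" else "ASSISTANT"
        lines.append(f"{role}: {m['content']}")
    lines.append("ASSISTANT:")  # trailing cue so the model generates the reply
    return "\n".join(lines)

prompt = format_vicuna(
    [{"role": "user", "content": "What is the capital of the Bahamas?"}],
    system="You are a helpful assistant.",
)
print(prompt)
```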
{"language": ["en"], "license": "apache-2.0", "datasets": ["teknium/OpenHermes-2.5"], "base_model": "mistral-community/Mixtral-8x22B-v0.1"}
fireworks-ai/mixtral-8x22b-instruct-oh
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T01:24:23+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Fireworks Mixtral 8x22b Instruct OH Fireworks Mixtral 8x22b Instruct OH is an instruct version of the latest MoE model from mistralai - 8x22b. This model was finetuned on ~10K entries from the OpenHermes dataset by NousResearch. Huge shoutout to Teknium and the NousResearch team for this high-quality SFT dataset. ## How to Get Started with the Model Visit our Playground to try the model on a serverless platform, or access it via API. Fine-tune it using our guide. The model uses the Vicuna chat template: or: ### Training Data To save training time, we sample & format 10K samples from the OpenHermes 2.5 dataset. We reformat the samples to fit the Vicuna chat template; apart from that, the dataset is used as is. #### Training Setup The model was trained on 8xA100 [80G] using DeepSpeed ZeRO-3. We had to fix an existing bug in DeepSpeed ZeRO-3 where the interaction between ZeRO-3 and MoE caused training to get stuck. ## Model Examination <details> <summary>Sample 1: Itinerary Question</summary> <code> SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions. USER: Create a travel itinerary for the bahamas. Spare no detail. ASSISTANT: Sure, I'd be happy to help you plan a trip to the Bahamas! Here's a detailed itinerary for your consideration: Day 1: Arrival in Nassau - Arrive at Lynden Pindling International Airport in Nassau, the capital city of the Bahamas. - Check into your hotel or resort. - Spend the afternoon exploring the city, visiting places like the Queen's Staircase, Fort Fincastle, and the Straw Market. - Enjoy a delicious Bahamian dinner at one of the local restaurants. Day 2: Nassau City Tour and Atlantis Resort - Start your day with a guided city tour of Nassau, visiting historical sites like the Government House, Parliament Square, and the Pirates of Nassau Museum. - In the afternoon, head to the famous Atlantis Resort on Paradise Island. Explore the resort's attractions, such as the Aquaventure Water Park, the Marine Habitat, and the casino. - End the day with a relaxing dinner at one of the resort's restaurants. Day 3: Exuma Cays Day Trip - Take a day trip to the Exuma Cays, a group of over 365 islands and cays. - Visit popular spots like the Swimming Pigs at Big Major Cay, the Thunderball Grotto (featured in the James Bond movie "Thunderball"), and the stunning sandbars. - Enjoy a picnic lunch on one of the pristine beaches. - Return to Nassau in the evening. Day 4: Blue Lagoon Island and Beach Day - Spend the morning at Blue Lagoon Island, a private island just a short boat ride from Nassau. - Enjoy activities like swimming, snorkeling, kayaking, or simply relaxing on the beach. - In the afternoon, head back to Nassau and spend some time at one of the city's beautiful beaches, such as Cable Beach or Junkanoo Beach. - Have dinner at a local seafood restaurant. Day 5: Departure - Depending on your flight schedule, you may have some time to explore Nassau further or do some last-minute shopping before heading to the airport for your departure. I hope this itinerary helps you plan your trip to the Bahamas! Let me know if you have any other questions or need further assistance. </code> </details> <details> <summary>Sample 2: QnA Question</summary> <code> SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions. USER: What is the medical term for strong headache on one side of the head? ASSISTANT: The medical term for a strong headache on one side of the head is called a migraine. 
</code> </details> <details> <summary>Sample 3: Knowledge Question</summary> <code> SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions. USER: How can I get stock price of stripe? ASSISTANT: Stripe is a private company and does not have publicly traded stock. Therefore, you cannot get the stock price for Stripe. </code> </details>
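The Training Setup above names DeepSpeed ZeRO-3 without showing a configuration. The sketch below is a generic, assumed ZeRO-3 setup for illustration only; none of these values are the actual Fireworks configuration, and the model-loading step is elided.

```python
# Illustrative DeepSpeed ZeRO-3 config (assumed values; not the actual
# Fireworks setup). ZeRO stage 3 shards parameters, gradients, and
# optimizer state across the 8 GPUs mentioned in the card.
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 1,   # placeholder
    "gradient_accumulation_steps": 8,      # placeholder
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
    },
}

# model = ...  # an 8x22B MoE model; loading is omitted here
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config
# )
```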
[ "# Model Card for Fireworks Mixtral 8x22b Instruct OH\n\nFireworks Mixtral 8x22b Instruct OH is an instruct version of the latest MoE model from mistralai - 8x22b. This\nmodel was finetuned on ~10K entries from OpenHermes dataset by NousResearch. Huge\nshoutout to Teknium and the NousResearch team for this high-quality SFT dataset.", "## How to Get Started with the Model\n\nVisit our Playground to try the model on a serverless platform, or access it via API. Fine-tune it using our guide.\n\nThe model uses Vicuna chat template:\n\nor:", "### Training Data\n\nTo save training time, we sample & format 10K samples from OpenHermes 2.5 dataset. We perform reformatting so as to \nsuit the vicuna chat template. Apart from that dataset is used as is.", "#### Training Setup\n\nThe model was trained on 8xA100 [80G] using Deepspeed Zero 3. We had to fix an existing bug with deepspeed zero 3 where interaction between zero 3 & MoE causes\ntraining to get stuck.", "## Model Examination\n\n<details Sample 1>\n <summary>Itinerary Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n \n USER: Create a travel itinerary for the bahamas. Spare no detail.\n \n ASSISTANT: Sure, I'd be happy to help you plan a trip to the Bahamas! Here's a detailed itinerary for your consideration:\n \n Day 1: Arrival in Nassau\n - Arrive at Lynden Pindling International Airport in Nassau, the capital city of the Bahamas.\n - Check into your hotel or resort. \n - Spend the afternoon exploring the city, visiting places like the Queen's Staircase, Fort Fincastle, and the Straw Market.\n - Enjoy a delicious Bahamian dinner at one of the local restaurants.\n \n Day 2: Nassau City Tour and Atlantis Resort \n - Start your day with a guided city tour of Nassau, visiting historical sites like the Government House, Parliament Square, and the Pirates of Nassau Museum.\n - In the afternoon, head to the famous Atlantis Resort on Paradise Island. Explore the resort's attractions, such as the Aquaventure Water Park, the Marine Habitat, and the casino.\n - End the day with a relaxing dinner at one of the resort's restaurants. \n \n Day 3: Exuma Cays Day Trip\n - Take a day trip to the Exuma Cays, a group of over 365 islands and cays. \n - Visit popular spots like the Swimming Pigs at Big Major Cay, the Thunderball Grotto (featured in the James Bond movie \"Thunderball\"), and the stunning sandbars.\n - Enjoy a picnic lunch on one of the pristine beaches.\n - Return to Nassau in the evening. \n \n Day 4: Blue Lagoon Island and Beach Day\n - Spend the morning at Blue Lagoon Island, a private island just a short boat ride from Nassau.\n - Enjoy activities like swimming, snorkeling, kayaking, or simply relaxing on the beach.\n - In the afternoon, head back to Nassau and spend some time at one of the city's beautiful beaches, such as Cable Beach or Junkanoo Beach.\n - Have dinner at a local seafood restaurant.\n \n Day 5: Departure \n - Depending on your flight schedule, you may have some time to explore Nassau further or do some last-minute shopping before heading to the airport for your departure.\n \n I hope this itinerary helps you plan your trip to the Bahamas! Let me know if you have any other questions or need further assistance.\n </code>\n</details>\n\n<details Sample 2>\n <summary>QnA Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. 
Please be kind, patient and truthful in answering the questions.\n\n USER: What is the medical term for strong headache on one side of the head?\n\n ASSISTANT: The medical term for a strong headache on one side of the head is called a migraine.\n </code>\n</details>\n\n<details Sample 3>\n <summary>Knowledge Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n \n USER: How can I get stock price of stripe?\n \n ASSISTANT: Stripe is a private company and does not have publicly traded stock. Therefore, you cannot get the stock price for Stripe.\n </code>\n</details>" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #en #dataset-teknium/OpenHermes-2.5 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Fireworks Mixtral 8x22b Instruct OH\n\nFireworks Mixtral 8x22b Instruct OH is an instruct version of the latest MoE model from mistralai - 8x22b. This\nmodel was finetuned on ~10K entries from OpenHermes dataset by NousResearch. Huge\nshoutout to Teknium and the NousResearch team for this high-quality SFT dataset.", "## How to Get Started with the Model\n\nVisit our Playground to try the model on a serverless platform, or access it via API. Fine-tune it using our guide.\n\nThe model uses Vicuna chat template:\n\nor:", "### Training Data\n\nTo save training time, we sample & format 10K samples from OpenHermes 2.5 dataset. We perform reformatting so as to \nsuit the vicuna chat template. Apart from that dataset is used as is.", "#### Training Setup\n\nThe model was trained on 8xA100 [80G] using Deepspeed Zero 3. We had to fix an existing bug with deepspeed zero 3 where interaction between zero 3 & MoE causes\ntraining to get stuck.", "## Model Examination\n\n<details Sample 1>\n <summary>Itinerary Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n \n USER: Create a travel itinerary for the bahamas. Spare no detail.\n \n ASSISTANT: Sure, I'd be happy to help you plan a trip to the Bahamas! Here's a detailed itinerary for your consideration:\n \n Day 1: Arrival in Nassau\n - Arrive at Lynden Pindling International Airport in Nassau, the capital city of the Bahamas.\n - Check into your hotel or resort. \n - Spend the afternoon exploring the city, visiting places like the Queen's Staircase, Fort Fincastle, and the Straw Market.\n - Enjoy a delicious Bahamian dinner at one of the local restaurants.\n \n Day 2: Nassau City Tour and Atlantis Resort \n - Start your day with a guided city tour of Nassau, visiting historical sites like the Government House, Parliament Square, and the Pirates of Nassau Museum.\n - In the afternoon, head to the famous Atlantis Resort on Paradise Island. Explore the resort's attractions, such as the Aquaventure Water Park, the Marine Habitat, and the casino.\n - End the day with a relaxing dinner at one of the resort's restaurants. \n \n Day 3: Exuma Cays Day Trip\n - Take a day trip to the Exuma Cays, a group of over 365 islands and cays. \n - Visit popular spots like the Swimming Pigs at Big Major Cay, the Thunderball Grotto (featured in the James Bond movie \"Thunderball\"), and the stunning sandbars.\n - Enjoy a picnic lunch on one of the pristine beaches.\n - Return to Nassau in the evening. \n \n Day 4: Blue Lagoon Island and Beach Day\n - Spend the morning at Blue Lagoon Island, a private island just a short boat ride from Nassau.\n - Enjoy activities like swimming, snorkeling, kayaking, or simply relaxing on the beach.\n - In the afternoon, head back to Nassau and spend some time at one of the city's beautiful beaches, such as Cable Beach or Junkanoo Beach.\n - Have dinner at a local seafood restaurant.\n \n Day 5: Departure \n - Depending on your flight schedule, you may have some time to explore Nassau further or do some last-minute shopping before heading to the airport for your departure.\n \n I hope this itinerary helps you plan your trip to the Bahamas! 
Let me know if you have any other questions or need further assistance.\n </code>\n</details>\n\n<details Sample 2>\n <summary>QnA Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n\n USER: What is the medical term for strong headache on one side of the head?\n\n ASSISTANT: The medical term for a strong headache on one side of the head is called a migraine.\n </code>\n</details>\n\n<details Sample 3>\n <summary>Knowledge Question</summary>\n <code>\n SYSTEM: You are a helpful assistant. Please be kind, patient and truthful in answering the questions.\n \n USER: How can I get stock price of stripe?\n \n ASSISTANT: Stripe is a private company and does not have publicly traded stock. Therefore, you cannot get the stock price for Stripe.\n </code>\n</details>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
smighani/NeuralHermes-2.5-Mistral-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T01:27:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Gemma 2B Zephyr DPO We trained the [google/gemma-2b](https://huggingface.co/google/gemma-2b) with DPO and data from `argilla/dpo-mix-7k`. We carefully selected the hyper-parameters to achieve the best DPO performance. ## Model description - **Model type:** A 2.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** Gemma Terms of Use - **Finetuned from model:** [google/gemma-2b](https://huggingface.co/google/gemma-2b) ## License This model has the same license as the [original Gemma model collection](https://ai.google.dev/gemma/terms) ## OpenLLM Leaderboard Performance | Models | Avg. | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8k | |-----------------------------------------|------|-------|-----------|------|------------|------------|-------| | google/gemma-2b | 46.37| 48.38 | 71.77 | 41.77| 33.08 | 66.77 | 16.91 | | google/gemma-2b-it | 42.75| 43.94 | 62.70 | 37.65| 45.82 | 60.93 | 5.46 | | wandb/gemma-2b-zephyr-sft | 47.18| 49.74 | 72.38 | 41.37| 34.42 | **66.93** | 18.27 | | wandb/gemma-2b-zephyr-dpo | 46.92| 49.66 | 72.23 | 41.13| 34.47 | 66.54 | 17.51 | | Columbia-NLP/gemma-2b-zephyr-sft | 48.75| 51.80 | 72.63 | 42.20| 41.96 | 63.85 | **20.09** | | **Columbia-NLP/gemma-2b-zephyr-dpo** | **49.14**| **52.22** | **73.11** | **42.55**| **42.64** | 64.40 | 19.94 | ## MT-Bench We evaluate our model with `GPT-4-0125-preview` as the judge. | Model | Total | Coding | Extraction | Humanities | Math | Reasoning | Roleplay | STEM | Writing | |------------------------------------------|-------|--------|------------|------------|------|-----------|----------|------|---------| | google/gemma-2b-it | 4.71 | 2.95 | **4.35** | 6.15 | 2.90 | 3.50 | 5.60 | **5.50** | **6.70** | | wandb/gemma-2b-zephyr-sft | 4.03 | 3.10 | 3.15 | 5.00 | 2.70 | 2.65 | 5.10 | 4.80 | 5.75 | | wandb/gemma-2b-zephyr-dpo | 4.06 | 2.80 | 2.90 | 5.55 | 2.65 | 2.70 | 5.20 | 4.80 | 5.85 | | anakin87_gemma-2b-orpo | 4.14 | 3.00 | 3.70 | 6.30 | 2.70 | 2.35 | 5.68 | 4.75 | 4.75 | | Columbia-NLP/gemma-2b-zephyr-sft | 4.34 | 3.10 | 3.70 | 6.25 | 2.65 | 2.70 | 5.55 | 5.25 | 5.50 | | **Columbia-NLP/gemma-2b-zephyr-dpo** | **4.75** | **3.50** | 4.05 | **6.75** | **3.30** | **3.70** | **5.85** | 5.40 | 5.53 |
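The card describes DPO training on `argilla/dpo-mix-7k` but does not show the training code. The following is a hedged sketch using TRL's `DPOTrainer` (API as of early-2024 TRL releases); the hyper-parameters are placeholders rather than the ones actually used, and the dataset rows may need mapping into prompt/chosen/rejected strings first.

```python
# Illustrative DPO fine-tuning sketch (placeholder hyper-parameters; the
# card does not disclose the actual training configuration).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Columbia-NLP/gemma-2b-zephyr-sft"   # SFT checkpoint named in the card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Column mapping to prompt/chosen/rejected strings is omitted for brevity.
train_ds = load_dataset("argilla/dpo-mix-7k", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # TRL clones the policy as the frozen reference
    beta=0.1,                # placeholder KL-penalty strength
    train_dataset=train_ds,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="gemma-2b-zephyr-dpo",
        per_device_train_batch_size=2,   # placeholder
        learning_rate=5e-7,              # placeholder
    ),
)
trainer.train()
```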
{"license": "other", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer"], "datasets": ["argilla/dpo-mix-7k"], "license_name": "gemma-terms-of-use", "license_link": "https://ai.google.dev/gemma/terms", "base_model": "Columbia-NLP/gemma-2b-zephyr-sft", "model-index": [{"name": "gemma-2b-zephyr-dpo", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 52.22, "name": "normalized accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 73.11, "name": "normalized accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 42.55, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 42.64}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.4, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 19.94, "name": "accuracy"}]}]}]}
Columbia-NLP/gemma-2b-zephyr-dpo
null
[ "transformers", "safetensors", "gemma", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:argilla/dpo-mix-7k", "base_model:Columbia-NLP/gemma-2b-zephyr-sft", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T01:28:13+00:00
[]
[]
TAGS #transformers #safetensors #gemma #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-argilla/dpo-mix-7k #base_model-Columbia-NLP/gemma-2b-zephyr-sft #license-other #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Model Card for Gemma 2B Zephyr DPO ================================== We trained the google/gemma-2b with DPO and data from 'argilla/dpo-mix-7k'. We carefully selected the hyper-parameters to achieve the best DPO performance. Model description ----------------- * Model type: A 2.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English * License: Gemma Terms of Use * Finetuned from model: google/gemma-2b License ------- This model has the same license as the original Gemma model collection OpenLLM Leaderboard Performance ------------------------------- MT-Bench -------- We evaluate our model with 'GPT-4-0125-preview' as the judge.
[]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-argilla/dpo-mix-7k #base_model-Columbia-NLP/gemma-2b-zephyr-sft #license-other #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
null
Model: 42MXT_CrudeEnergy_200KE_FTINST (42INSTCrudeEnergyLLM) Description: Before using this model, please read the disclaimer section. By downloading and using the model you agree to the terms and conditions contained in the disclaimer section. 42INSTCrudeEnergyLLM is a fine-tuned language model based on an instruct architecture, specifically designed to assist energy traders in making informed decisions by analyzing and interpreting various types of events that impact crude energy markets. The model has been trained on approximately 25k individual signals comprising 100K patterns & factors from 5M unique events associated with energy and energy-linked assets from 42AI's proprietary graphs. Investment accuracy as measured by trading in crude oil and natural gas futures was around 67%, while accuracy in tasks associated with language comprehension and reasoning was 74%. This is comparable to a mid- to senior-level energy analyst & trader. Key Features: Quantamental Analysis: 42INSTCrudeEnergyLLM can process and interpret quantitative and fundamental data related to crude energy markets, such as supply and demand dynamics, production levels, inventory reports, and economic indicators. It can provide insights and recommendations based on this data. It can identify key events, such as company announcements, regulatory changes, and industry developments, and assess their potential impact on crude energy prices by recognizing patterns and generating insights. The model is capable of analyzing technical events and patterns in crude energy price charts, such as support and resistance levels, charts, leverage ratios, CoTs, etc. Based on all of the above, it is capable of generating trading signals. Geopolitical Event Analysis: The model is trained to understand and interpret geopolitical events and patterns that can influence crude energy markets, such as political instability, trade agreements, sanctions, and conflicts. It can provide insights into how these events may affect supply chains, demand, and overall market sentiment. Trading Strategies and Risk Management: 42INSTCrudeEnergyLLM can generate trading strategies based on the analysis of various factors - notably non-linear patterns, events, and market conditions specific to the crude, nat gas & energy sector. It can also assist in risk management by identifying potential risks and suggesting appropriate hedging strategies. It can also create market-neutral, long-short & long-only strategies focused on the energy markets. Natural Language Interaction: Users can interact with 42INSTCrudeEnergyLLM using natural language queries and instructions. The model can provide clear and concise responses, explanations, and recommendations in a user-friendly manner, tailored to the crude energy trading domain. By leveraging the power of instruct-based learning and fine-tuning on crude energy-specific data, 42INSTCrudeEnergyLLM aims to empower crude energy traders with valuable insights, analysis, and trading recommendations. The model needs a real-time data source that feeds it market information. Such a real-time data source may be a human energy trader, an MRN (a machine-readable news feed), or a traditional market data source such as Bloomberg or Refinitiv. The model's ability to process and interpret diverse types of events and data sources makes it a comprehensive tool for navigating the complex and dynamic crude energy markets. 
Disclaimer: The 42MXT_CrudeEnergy_200KE_FTINST (42INSTCrudeEnergyLLM) model is an open-source language model designed for research and informational purposes only. The model's outputs, including but not limited to insights, analysis, trading signals, strategies, and recommendations, are not intended to constitute investment advice, financial advice, or trading advice in any form. The use of this model for day-to-day trading is strongly discouraged. Users of the 42INSTCrudeEnergyLLM model must exercise caution and critically examine the model's outputs before making any trading or investment decisions. The model's outputs may include patterns or predictions that attempt to forecast future market behavior, which may not be suitable or reliable for backtesting or real-world trading scenarios. Before using the 42INSTCrudeEnergyLLM model in any production environment or real-world trading scenario, it is imperative that users seek the guidance and expertise of qualified energy consultants, advisers, and experienced traders. The model's outputs should be treated as one of many inputs in the decision-making process and should not be relied upon as the sole basis for any trading or investment decisions. The authors, developers, and contributors of the 42INSTCrudeEnergyLLM model shall not be held liable, under any circumstances, for any losses, damages, or adverse outcomes arising from the use of this model, including but not limited to financial losses, trading losses, or any other form of direct or indirect losses. Users of the model assume full responsibility for their actions and decisions based on the model's outputs. By using the 42INSTCrudeEnergyLLM model, users acknowledge and agree to this disclaimer and understand that the model is provided "as is" without any warranties, expressed or implied. The authors, developers, and contributors of the model disclaim any liability or responsibility for the accuracy, completeness, or reliability of the model's outputs or any consequences arising from its use. It is the sole responsibility of the users to ensure compliance with all applicable laws, regulations, and guidelines in their respective jurisdictions when using the 42INSTCrudeEnergyLLM model or acting upon its outputs.
{"language": ["en"], "license": "apache-2.0", "tags": ["finance", "text-generation-inference"]}
heliostar9000/42MXT_CrudeEnergy_200KE_FTINST
null
[ "safetensors", "finance", "text-generation-inference", "en", "license:apache-2.0", "region:us" ]
null
2024-04-12T01:32:33+00:00
[]
[ "en" ]
TAGS #safetensors #finance #text-generation-inference #en #license-apache-2.0 #region-us
Model: 42MXT_CrudeEnergy_200KE_FTINST (42INSTCrudeEnergyLLM) Description: Before using this model, please read the disclaimer section. By downloading and using the model you agree to the terms and conditions contained in the disclaimer section. 42INSTCrudeEnergyLLM is a fine-tuned language model based on an instruct architecture, specifically designed to assist energy traders in making informed decisions by analyzing and interpreting various types of events that impact crude energy markets. The model has been trained on approximately 25k individual signals comprising 100K patterns & factors from 5M unique events associated with energy and energy-linked assets from 42AI's proprietary graphs. Investment accuracy as measured by trading in crude oil and natural gas futures was around 67%, while accuracy in tasks associated with language comprehension and reasoning was 74%. This is comparable to a mid- to senior-level energy analyst & trader. Key Features: Quantamental Analysis: 42INSTCrudeEnergyLLM can process and interpret quantitative and fundamental data related to crude energy markets, such as supply and demand dynamics, production levels, inventory reports, and economic indicators. It can provide insights and recommendations based on this data. It can identify key events, such as company announcements, regulatory changes, and industry developments, and assess their potential impact on crude energy prices by recognizing patterns and generating insights. The model is capable of analyzing technical events and patterns in crude energy price charts, such as support and resistance levels, charts, leverage ratios, CoTs, etc. Based on all of the above, it is capable of generating trading signals. Geopolitical Event Analysis: The model is trained to understand and interpret geopolitical events and patterns that can influence crude energy markets, such as political instability, trade agreements, sanctions, and conflicts. It can provide insights into how these events may affect supply chains, demand, and overall market sentiment. Trading Strategies and Risk Management: 42INSTCrudeEnergyLLM can generate trading strategies based on the analysis of various factors - notably non-linear patterns, events, and market conditions specific to the crude, nat gas & energy sector. It can also assist in risk management by identifying potential risks and suggesting appropriate hedging strategies. It can also create market-neutral, long-short & long-only strategies focused on the energy markets. Natural Language Interaction: Users can interact with 42INSTCrudeEnergyLLM using natural language queries and instructions. The model can provide clear and concise responses, explanations, and recommendations in a user-friendly manner, tailored to the crude energy trading domain. By leveraging the power of instruct-based learning and fine-tuning on crude energy-specific data, 42INSTCrudeEnergyLLM aims to empower crude energy traders with valuable insights, analysis, and trading recommendations. The model needs a real-time data source that feeds it market information. Such a real-time data source may be a human energy trader, an MRN (a machine-readable news feed), or a traditional market data source such as Bloomberg or Refinitiv. The model's ability to process and interpret diverse types of events and data sources makes it a comprehensive tool for navigating the complex and dynamic crude energy markets. 
Disclaimer: The 42MXT_CrudeEnergy_200KE_FTINST (42INSTCrudeEnergyLLM) model is an open-source language model designed for research and informational purposes only. The model's outputs, including but not limited to insights, analysis, trading signals, strategies, and recommendations, are not intended to constitute investment advice, financial advice, or trading advice in any form. The use of this model for day-to-day trading is strongly discouraged. Users of the 42INSTCrudeEnergyLLM model must exercise caution and critically examine the model's outputs before making any trading or investment decisions. The model's outputs may include patterns or predictions that attempt to forecast future market behavior, which may not be suitable or reliable for backtesting or real-world trading scenarios. Before using the 42INSTCrudeEnergyLLM model in any production environment or real-world trading scenario, it is imperative that users seek the guidance and expertise of qualified energy consultants, advisers, and experienced traders. The model's outputs should be treated as one of many inputs in the decision-making process and should not be relied upon as the sole basis for any trading or investment decisions. The authors, developers, and contributors of the 42INSTCrudeEnergyLLM model shall not be held liable, under any circumstances, for any losses, damages, or adverse outcomes arising from the use of this model, including but not limited to financial losses, trading losses, or any other form of direct or indirect losses. Users of the model assume full responsibility for their actions and decisions based on the model's outputs. By using the 42INSTCrudeEnergyLLM model, users acknowledge and agree to this disclaimer and understand that the model is provided "as is" without any warranties, expressed or implied. The authors, developers, and contributors of the model disclaim any liability or responsibility for the accuracy, completeness, or reliability of the model's outputs or any consequences arising from its use. It is the sole responsibility of the users to ensure compliance with all applicable laws, regulations, and guidelines in their respective jurisdictions when using the 42INSTCrudeEnergyLLM model or acting upon its outputs.
[]
[ "TAGS\n#safetensors #finance #text-generation-inference #en #license-apache-2.0 #region-us \n" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
{"library_name": "peft", "base_model": "facebook/bart-base"}
Naying0206/b2b-lora-ar
null
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:facebook/bart-base", "region:us" ]
null
2024-04-12T01:36:23+00:00
[ "1910.09700" ]
[]
TAGS #peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-facebook/bart-base #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.1.dev0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
[ "TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-facebook/bart-base #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
text-generation
transformers
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/qDMj9C-ApFgD9KfoVigyW.jpeg) # Quants thanks to Lewdiculus: https://huggingface.co/Lewdiculous/Nyanade_Stunna-Maid-7B-GGUF-IQ-Imatrix # Vision/multimodal capabilities: If you want to use vision functionality: * You must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp). To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo. * You can load the **mmproj** by using the corresponding section in the interface: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
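For readers who prefer a command line to the interface screenshot above, a hedged equivalent follows. It assumes a recent Koboldcpp build, whose launcher accepts `--model` and `--mmproj` flags; both filenames here are placeholders rather than the actual files in this repo:

```
python koboldcpp.py --model Nyanade_Stunna-Maid-7B.Q4_K_M.gguf --mmproj nyanade-mmproj-f16.gguf
```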
{"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Nitral-AI/Nyanade-Maid-7B", "Nitral-AI/Nyan-Stunna-7B"]}
Nitral-AI/Nyanade_Stunna-Maid-7B
null
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "mergekit", "merge", "base_model:Nitral-AI/Nyanade-Maid-7B", "base_model:Nitral-AI/Nyan-Stunna-7B", "license:other", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-12T01:39:15+00:00
[]
[]
TAGS #transformers #safetensors #gguf #mistral #text-generation #mergekit #merge #base_model-Nitral-AI/Nyanade-Maid-7B #base_model-Nitral-AI/Nyan-Stunna-7B #license-other #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
!image/jpeg # Quants thanks to Lewdiculus: URL # Vision/multimodal capabilities: If you want to use vision functionality: * You must use the latest version of Koboldcpp. To use the multimodal capabilities of this model and use vision, you need to load the specified mmproj file, which can be found inside this model repo. * You can load the mmproj by using the corresponding section in the interface: !image/png
[ "# Quants thanks to Lewdiculus: URL", "# Vision/multimodal capabilities:\n\n If you want to use vision functionality:\n\n * You must use the latest versions of Koboldcpp.\n \nTo use the multimodal capabilities of this model and use vision you need to load the specified mmproj file, this can be found inside this model repo.\n \n * You can load the mmproj by using the corresponding section in the interface:\n\n !image/png" ]
[ "TAGS\n#transformers #safetensors #gguf #mistral #text-generation #mergekit #merge #base_model-Nitral-AI/Nyanade-Maid-7B #base_model-Nitral-AI/Nyan-Stunna-7B #license-other #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Quants thanks to Lewdiculus: URL", "# Vision/multimodal capabilities:\n\n If you want to use vision functionality:\n\n * You must use the latest versions of Koboldcpp.\n \nTo use the multimodal capabilities of this model and use vision you need to load the specified mmproj file, this can be found inside this model repo.\n \n * You can load the mmproj by using the corresponding section in the interface:\n\n !image/png" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-7b-chat-reducedTest-comm-liberal This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2200 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4400 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
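The card gives no loading example. As a minimal sketch, assuming this repo holds a PEFT adapter for the stated base model (access to the gated Llama 2 weights is required):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model named in the card; the adapter repo id is this record's id.
# Loading meta-llama weights requires an authenticated Hugging Face account.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Niyantha23M/llama-7b-chat-reducedTest-comm-liberal")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```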
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "llama-7b-chat-reducedTest-comm-liberal", "results": []}]}
Niyantha23M/llama-7b-chat-reducedTest-comm-liberal
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-12T01:40:22+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
# llama-7b-chat-reducedTest-comm-liberal This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2200 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4400 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# llama-7b-chat-reducedTest-comm-liberal\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2200\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4400\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.2.0\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n", "# llama-7b-chat-reducedTest-comm-liberal\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2200\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4400\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.2.0\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
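The quickstart above is blank, but this record's metadata declares a `text-classification` pipeline and ships widget examples in Tagalog, English, and Tag-lish. A hedged sketch reusing those exact inputs:

```python
from transformers import pipeline

# Repo id and example inputs come from this record's metadata widget entries.
clf = pipeline("text-classification", model="MENG21/stud-fac-eval-distilbert-base-uncased_v2")
print(clf("laging pumapasok si sir!"))
print(clf("I really don't understand the lesson"))
print(clf("I love the way na magturo si sir, I always understand the lesson lagi"))
```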
{"language": ["tl", "en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["MENG21/studfacultyeval_dataset"], "metrics": ["accuracy", "f1", "recall", "precision"], "pipeline_tag": "text-classification", "widget": [{"text": "laging pumapasok si sir!", "example_title": "Tagalog"}, {"text": "I really don't understand the lesson", "example_title": "English"}, {"text": "I love the way na magturo si sir, I always understand the lesson lagi", "example_title": "Tag-lish"}]}
MENG21/stud-fac-eval-distilbert-base-uncased_v2
null
[ "transformers", "safetensors", "distilbert", "text-classification", "tl", "en", "dataset:MENG21/studfacultyeval_dataset", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-12T01:40:27+00:00
[ "1910.09700" ]
[ "tl", "en" ]
TAGS #transformers #safetensors #distilbert #text-classification #tl #en #dataset-MENG21/studfacultyeval_dataset #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #distilbert #text-classification #tl #en #dataset-MENG21/studfacultyeval_dataset #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # experiments This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.4811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7238 | 0.62 | 2 | 2.7042 | | 3.107 | 1.23 | 4 | 2.5366 | | 2.9896 | 1.85 | 6 | 2.4811 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
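The card lists PEFT among its framework versions but includes no inference snippet, so a hedged sketch follows. `merge_and_unload` is standard PEFT API for folding a LoRA adapter into the base weights; that this repo's adapter is LoRA is an assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Base model named in the card; the adapter repo id is this record's id.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "ShushantLLM/experiments")

# Optionally fold the adapter into the base weights for plain transformers inference.
merged = model.merge_and_unload()
```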
{"library_name": "peft", "tags": ["trl", "sft", "test_LLama", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "experiments", "results": []}]}
ShushantLLM/experiments
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "test_LLama", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-04-12T01:41:02+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #test_LLama #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #region-us
experiments =========== This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on an unspecified dataset. It achieves the following results on the evaluation set: * Loss: 2.4811 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.05 * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #test_LLama #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
{"library_name": "peft", "base_model": "Viet-Mistral/Vistral-7B-Chat"}
MinhLongBui/Vistral-7b-retails-v2
null
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:Viet-Mistral/Vistral-7B-Chat", "region:us" ]
null
2024-04-12T01:41:32+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #mistral #arxiv-1910.09700 #base_model-Viet-Mistral/Vistral-7B-Chat #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.8.2
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.8.2" ]
[ "TAGS\n#peft #safetensors #mistral #arxiv-1910.09700 #base_model-Viet-Mistral/Vistral-7B-Chat #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.8.2" ]
text-generation
transformers
# Uploaded model - **Developed by:** chatty123 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
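The card credits Unsloth but shows no loading code. A sketch assuming the publicly documented Unsloth API; the sequence length and 4-bit flag are illustrative values, not taken from the card:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chatty123/mistral_rank16_dpo",  # this record's repo id
    max_seq_length=2048,   # illustrative value
    load_in_4bit=True,     # matches the 4-bit base model named in the card
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode
```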
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "dpo"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
chatty123/mistral_rank16_dpo
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "dpo", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:41:55+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: chatty123 - License: apache-2.0 - Finetuned from model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HeydarS/zephyr_witQA_peft_v45
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:41:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
adapter-transformers
# Adapter `BigTMiami/adapter_classification_no_pretraining_20_percent` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/adapter_classification_no_pretraining_20_percent", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
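Continuing the snippet above, a hedged example of running the loaded classification head on one input; the example sentence is made up, and the label mapping depends on how the head was trained:

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This review was very helpful to me.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # the active adapter and head produce classification logits
print(outputs.logits.argmax(dim=-1))
```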
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]}
BigTMiami/adapter_classification_no_pretraining_20_percent
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_helpfulness", "region:us" ]
null
2024-04-12T01:42:49+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
# Adapter 'BigTMiami/adapter_classification_no_pretraining_20_percent' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/adapter_classification_no_pretraining_20_percent' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n", "# Adapter 'BigTMiami/adapter_classification_no_pretraining_20_percent' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ayman56/llama2_finetuned_full_stackoverflow_test
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:42:55+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-7b-chat-dummy This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2200 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4400 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.13.3
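The card above records training details but no inference snippet. A minimal sketch, assuming the repo (`Niyantha23M/llama-7b-chat-dummy`, taken from this record's id field) contains full causal-LM weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Niyantha23M/llama-7b-chat-dummy"  # from the record's id field
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical prompt; the card does not specify a chat format.
inputs = tokenizer("Explain gradient accumulation in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```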
{"tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "llama-7b-chat-dummy", "results": []}]}
Niyantha23M/llama-7b-chat-dummy
null
[ "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-12T01:44:24+00:00
[]
[]
TAGS #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
# llama-7b-chat-dummy This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2200 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4400 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.13.3
[ "# llama-7b-chat-dummy\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2200\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4400\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.33.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.13.3" ]
[ "TAGS\n#trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n", "# llama-7b-chat-dummy\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2200\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4400\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.33.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.13.3" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Abirate/quantized-gemma-1.1-7b-it-finetuned-on-kaggle-writeups
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:46:45+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_3 This model is a fine-tuned version of [ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2](https://huggingface.co/ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
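To make the hyperparameter list above concrete, here is a sketch of how those values map onto transformers' `TrainingArguments`; the DPO trainer wiring itself (model, reference model, beta, datasets) is omitted and assumed:

```python
from transformers import TrainingArguments

# Mirrors the card's hyperparameters.
# Effective train batch: 8 per device x 8 GPUs x 2 accumulation steps = 128.
args = TrainingArguments(
    output_dir="0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_3",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)
```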
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2", "model-index": [{"name": "0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_3", "results": []}]}
ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_3
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T01:48:48+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_3 This model is a fine-tuned version of ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_3\n\nThis model is a fine-tuned version of ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_3\n\nThis model is a fine-tuned version of ZhangShenao/0.0001_idpo_same_nodpo_noreplacerej_6iters_iter_2 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
sentence-similarity
sentence-transformers
# luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-2e-7-m-1-e-3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-2e-7-m-1-e-3') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-2e-7-m-1-e-3) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 5885 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 1} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-07 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2648, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
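The fit() parameters listed in the card can be reassembled into a runnable training sketch. The starting checkpoint and the toy triplet below are assumptions (the card does not state the base model, though the repo name suggests all-mpnet-base-v2):

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # assumed base

# Toy stand-in for the real (anchor, positive, negative) triplets.
train_examples = [InputExample(texts=["anchor text", "positive text", "negative text"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Loss configuration exactly as listed in the card.
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.COSINE,
    triplet_margin=1,
)

# fit() parameters copied from the card's "Parameters of the fit()-Method".
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    scheduler="WarmupLinear",
    warmup_steps=2648,
    optimizer_params={"lr": 2e-07},
    weight_decay=0.01,
    max_grad_norm=1,
)
```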
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-2e-7-m-1-e-3
null
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:49:23+00:00
[]
[]
TAGS #sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-2e-7-m-1-e-3 This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 5885 with parameters: Loss: 'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters: Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-2e-7-m-1-e-3\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 5885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-2e-7-m-1-e-3\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 5885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan_sum_04-12-01-45-51 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
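No usage code is given. A hedged sketch of summarization inference with transformers, assuming the repo holds full seq2seq weights rather than only training logs (its tag list lacks a weights format tag; if loading fails, substitute the `google/flan-t5-base` base model):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="reeddg/flan_sum_04-12-01-45-51")
text = "Paste the document to summarize here ..."
print(summarizer(text, max_new_tokens=80)[0]["summary_text"])
```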
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flan_sum_04-12-01-45-51", "results": []}]}
reeddg/flan_sum_04-12-01-45-51
null
[ "tensorboard", "generated_from_trainer", "base_model:google/flan-t5-base", "license:apache-2.0", "region:us" ]
null
2024-04-12T01:50:32+00:00
[]
[]
TAGS #tensorboard #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #region-us
# flan_sum_04-12-01-45-51 This model is a fine-tuned version of google/flan-t5-base on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
[ "# flan_sum_04-12-01-45-51\n\nThis model is a fine-tuned version of google/flan-t5-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10", "### Training results", "### Framework versions\n\n- Transformers 4.31.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3" ]
[ "TAGS\n#tensorboard #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #region-us \n", "# flan_sum_04-12-01-45-51\n\nThis model is a fine-tuned version of google/flan-t5-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10", "### Training results", "### Framework versions\n\n- Transformers 4.31.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3" ]
null
peft
# DavidAU/Valor-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`NeuralNovel/Valor-7B-v0.1`](https://huggingface.co/NeuralNovel/Valor-7B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NeuralNovel/Valor-7B-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Valor-7B-v0.1-Q4_K_M-GGUF --model valor-7b-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Valor-7B-v0.1-Q4_K_M-GGUF --model valor-7b-v0.1.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m valor-7b-v0.1.Q4_K_M.gguf -n 128 ```
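Beyond the brew/CLI route in the card, the same GGUF file can be loaded from Python with llama-cpp-python. This is an assumed alternative, not part of the original card; the same pattern applies to the other GGUF conversions in the records below:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Downloads the quant from the Hub and loads it with a 2048-token context.
llm = Llama.from_pretrained(
    repo_id="DavidAU/Valor-7B-v0.1-Q4_K_M-GGUF",
    filename="valor-7b-v0.1.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```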
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "datasets": ["NeuralNovel/Neural-Story-v1"], "base_model": "alnrg2arg/blockchainlabs_7B_merged_test2_4", "model-index": [{"name": "qlora-out", "results": []}]}
DavidAU/Valor-7B-v0.1-Q4_K_M-GGUF
null
[ "peft", "gguf", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "dataset:NeuralNovel/Neural-Story-v1", "base_model:alnrg2arg/blockchainlabs_7B_merged_test2_4", "license:apache-2.0", "region:us" ]
null
2024-04-12T01:51:13+00:00
[]
[]
TAGS #peft #gguf #generated_from_trainer #llama-cpp #gguf-my-repo #dataset-NeuralNovel/Neural-Story-v1 #base_model-alnrg2arg/blockchainlabs_7B_merged_test2_4 #license-apache-2.0 #region-us
# DavidAU/Valor-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from 'NeuralNovel/Valor-7B-v0.1' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Valor-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Valor-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#peft #gguf #generated_from_trainer #llama-cpp #gguf-my-repo #dataset-NeuralNovel/Neural-Story-v1 #base_model-alnrg2arg/blockchainlabs_7B_merged_test2_4 #license-apache-2.0 #region-us \n", "# DavidAU/Valor-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Valor-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Tiger-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`NeuralNovel/Tiger-7B-v0.1`](https://huggingface.co/NeuralNovel/Tiger-7B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NeuralNovel/Tiger-7B-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Tiger-7B-v0.1-Q4_K_M-GGUF --model tiger-7b-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Tiger-7B-v0.1-Q4_K_M-GGUF --model tiger-7b-v0.1.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiger-7b-v0.1.Q4_K_M.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "NeuralNovel/Neural-Story-v1"], "base_model": ["NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story", "NeuralNovel/Gecko-7B-v0.1-DPO"], "model-index": [{"name": "Tiger-7b-v0.1", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 59.98, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 83.21, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.42, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 61.03}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 77.66, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 46.78, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Tiger-7B-v0.1-Q4_K_M-GGUF
null
[ "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "dataset:Intel/orca_dpo_pairs", "dataset:NeuralNovel/Neural-Story-v1", "base_model:NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story", "base_model:NeuralNovel/Gecko-7B-v0.1-DPO", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-12T01:52:37+00:00
[]
[]
TAGS #gguf #mergekit #merge #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #dataset-NeuralNovel/Neural-Story-v1 #base_model-NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story #base_model-NeuralNovel/Gecko-7B-v0.1-DPO #license-apache-2.0 #model-index #region-us
# DavidAU/Tiger-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from 'NeuralNovel/Tiger-7B-v0.1' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Tiger-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Tiger-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #mergekit #merge #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #dataset-NeuralNovel/Neural-Story-v1 #base_model-NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story #base_model-NeuralNovel/Gecko-7B-v0.1-DPO #license-apache-2.0 #model-index #region-us \n", "# DavidAU/Tiger-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Tiger-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# DavidAU/Gecko-7B-v0.1-DPO-Q4_K_M-GGUF This model was converted to GGUF format from [`NeuralNovel/Gecko-7B-v0.1-DPO`](https://huggingface.co/NeuralNovel/Gecko-7B-v0.1-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NeuralNovel/Gecko-7B-v0.1-DPO) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Gecko-7B-v0.1-DPO-Q4_K_M-GGUF --model gecko-7b-v0.1-dpo.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Gecko-7B-v0.1-DPO-Q4_K_M-GGUF --model gecko-7b-v0.1-dpo.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gecko-7b-v0.1-dpo.Q4_K_M.gguf -n 128 ```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs"], "base_model": "NeuralNovel/Gecko-7B-v0.1", "inference": false, "model-index": [{"name": "Gecko-7B-v0.1-DPO", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 56.74, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 82.38, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 60.42, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 57.42}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 77.35, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 45.03, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1-DPO", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Gecko-7B-v0.1-DPO-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "dataset:Intel/orca_dpo_pairs", "base_model:NeuralNovel/Gecko-7B-v0.1", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-12T01:54:20+00:00
[]
[]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #base_model-NeuralNovel/Gecko-7B-v0.1 #license-apache-2.0 #model-index #region-us
# DavidAU/Gecko-7B-v0.1-DPO-Q4_K_M-GGUF This model was converted to GGUF format from 'NeuralNovel/Gecko-7B-v0.1-DPO' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Gecko-7B-v0.1-DPO-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Gecko-7B-v0.1-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #base_model-NeuralNovel/Gecko-7B-v0.1 #license-apache-2.0 #model-index #region-us \n", "# DavidAU/Gecko-7B-v0.1-DPO-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Gecko-7B-v0.1-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-to-image
diffusers
# Shitty Doodle Drawings by EZG <Gallery /> ## Trigger words You should use `ezgdrw style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/EgadZoundsGadzooks/doodle-drawings-lora/tree/main) them in the Files & versions tab.
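A minimal diffusers loading sketch for this LoRA; the base checkpoint comes from the record's metadata, while calling `load_lora_weights` on the repo id is an assumption about how the Safetensors file is laid out:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # base model per the record's metadata
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("EgadZoundsGadzooks/doodle-drawings-lora")

# "ezgdrw style" is the trigger phrase named in the card.
image = pipe("a cozy cottage, ezgdrw style").images[0]
image.save("cottage_doodle.png")
```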
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "-", "output": {"url": "images/GIeYWySOY44Is0rd2Ds8a.jpeg"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "ezgdrw style"}
EgadZoundsGadzooks/doodle-drawings-lora
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2024-04-12T01:55:15+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
# Shitty Doodle Drawings by EZG <Gallery /> ## Trigger words You should use 'ezgdrw style' to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# Shitty Doodle Drawings by EZG\n\n<Gallery />", "## Trigger words\n\nYou should use 'ezgdrw style' to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n", "# Shitty Doodle Drawings by EZG\n\n<Gallery />", "## Trigger words\n\nYou should use 'ezgdrw style' to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Labagaite/mistral-Summarizer-7b-instruct-v0.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
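To make the Usage section above concrete, here is a minimal Python sketch that fetches one of the quants from the table and runs it locally. The Q4_K_M filename and repo id come from the table itself; the choice of `huggingface_hub` plus `llama-cpp-python` is an assumption of this sketch, not something the card prescribes.

```python
# Hedged sketch: download a single-file quant from this repo and run it.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is one of the "fast, recommended" quants in the table above.
model_path = hf_hub_download(
    repo_id="mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF",
    filename="mistral-Summarizer-7b-instruct-v0.2.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Summarize in one sentence: GGUF is a single-file format "
          "for quantized LLM weights used by llama.cpp.", max_tokens=64)
print(out["choices"][0]["text"])
```

Any other quant from the table works the same way; only the filename changes.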
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "summarizer", "16bit"], "base_model": "Labagaite/mistral-Summarizer-7b-instruct-v0.2", "quantized_by": "mradermacher"}
mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF
null
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "summarizer", "16bit", "en", "base_model:Labagaite/mistral-Summarizer-7b-instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T01:55:38+00:00
[]
[ "en" ]
TAGS #transformers #gguf #text-generation-inference #unsloth #mistral #summarizer #16bit #en #base_model-Labagaite/mistral-Summarizer-7b-instruct-v0.2 #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #summarizer #16bit #en #base_model-Labagaite/mistral-Summarizer-7b-instruct-v0.2 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # assumed import; load_from_hub is the helper defined alongside this model in the training notebook model = load_from_hub(repo_id="amine-01/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
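As a complement, here is a hedged evaluation sketch for the downloaded pickle. The card confirms `model["env_id"]`; the `"qtable"` key and the use of `gymnasium` plus `huggingface_hub` are assumptions about how the pickle is laid out.

```python
# Hedged sketch: greedy single-episode rollout with the downloaded Q-table.
# Assumption: the pickle is a dict holding "env_id" (confirmed by the card)
# and "qtable" (assumed key for the state-action value table).
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="amine-01/q-FrozenLake-v1-4x4-noSlippery",
                       filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# The card notes you may need extra attributes such as is_slippery=False.
env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # always take the greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```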
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
amine-01/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-12T01:58:57+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HeydarS/llama2_EQ_peft_v48
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-12T01:59:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
This model has been pushed to the Hub using ****: - Repo: [More Information Needed] - Docs: [More Information Needed]
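Given the `pytorch_model_hub_mixin` tags on this repo, here is a hedged sketch of the push/load pattern such repos typically follow. The `FFNN` class name, its constructor arguments, and the layer sizes are placeholders invented for illustration (the real architecture is undocumented), and `from_pretrained` only works with the class definition that was actually used at push time.

```python
# Hedged sketch of the PyTorchModelHubMixin pattern; FFNN is a placeholder.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class FFNN(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_dim: int = 16, hidden: int = 32, out_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Pushing and loading follow the mixin's API (repo id from this card):
# FFNN().push_to_hub("tjl223/artist-coherency-ffnn")
# model = FFNN.from_pretrained("tjl223/artist-coherency-ffnn")
```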
{"tags": ["pytorch_model_hub_mixin", "model_hub_mixin"]}
tjl223/artist-coherency-ffnn
null
[ "transformers", "pytorch", "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:00:41+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #pytorch_model_hub_mixin #model_hub_mixin #endpoints_compatible #region-us
This model has been pushed to the Hub using : - Repo: - Docs:
[]
[ "TAGS\n#transformers #pytorch #safetensors #pytorch_model_hub_mixin #model_hub_mixin #endpoints_compatible #region-us \n" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gymnasium as gym # assumed import; load_from_hub is the helper defined alongside this model in the training notebook model = load_from_hub(repo_id="amine-01/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
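Here is a hedged sketch that estimates the mean reward over 100 greedy episodes, mirroring the mean_reward metric ("7.54 +/- 2.69") reported for this model. The `"qtable"` key and the use of `gymnasium` plus `huggingface_hub` are assumptions about how the pickle is laid out; `model["env_id"]` is confirmed by the card.

```python
# Hedged sketch: estimate mean reward over several greedy episodes.
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="amine-01/Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
rewards = []
for _ in range(100):
    state, _ = env.reset()
    done, ep_reward = False, 0.0
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # "qtable" is an assumed key
        state, r, terminated, truncated, _ = env.step(action)
        ep_reward += r
        done = terminated or truncated
    rewards.append(ep_reward)
print(f"mean reward: {np.mean(rewards):.2f} +/- {np.std(rewards):.2f}")
```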
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.69", "name": "mean_reward", "verified": false}]}]}]}
amine-01/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-12T02:01:17+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stepanom_model This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2797 - Accuracy: 0.9348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.937e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.3672 | 1.0 | 539 | 0.7685 | 0.8328 | | 0.5966 | 2.0 | 1078 | 0.4007 | 0.8982 | | 0.3074 | 3.0 | 1617 | 0.3301 | 0.9149 | | 0.1851 | 4.0 | 2156 | 0.3010 | 0.9249 | | 0.1099 | 5.0 | 2695 | 0.2945 | 0.9274 | | 0.0758 | 6.0 | 3234 | 0.2832 | 0.9321 | | 0.044 | 7.0 | 3773 | 0.2797 | 0.9348 | | 0.0285 | 8.0 | 4312 | 0.2868 | 0.9337 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "aubmindlab/bert-base-arabertv02", "widget": [{"text": "\u0647\u0644 \u064a\u0645\u0643\u0646\u0643 \u062a\u062a\u0628\u0639 \u0628\u0637\u0627\u0642\u062a\u064a \u0645\u0646 \u0623\u062c\u0644\u064a\u061f"}], "model-index": [{"name": "stepanom_model", "results": []}]}
stepanom/stepanom_model
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:01:45+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-aubmindlab/bert-base-arabertv02 #autotrain_compatible #endpoints_compatible #region-us
stepanom\_model =============== This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2797 * Accuracy: 0.9348 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2.937e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 8 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.937e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-aubmindlab/bert-base-arabertv02 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.937e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# DavidAU/Gecko-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`NeuralNovel/Gecko-7B-v0.1`](https://huggingface.co/NeuralNovel/Gecko-7B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NeuralNovel/Gecko-7B-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Gecko-7B-v0.1-Q4_K_M-GGUF --model gecko-7b-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Gecko-7B-v0.1-Q4_K_M-GGUF --model gecko-7b-v0.1.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gecko-7b-v0.1.Q4_K_M.gguf -n 128 ```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "inference": false, "model-index": [{"name": "Gecko-7B-v0.1", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 61.35, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 83.36, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.05, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 62.6}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 77.58, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 41.55, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Gecko-7B-v0.1-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-12T02:07:23+00:00
[]
[]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #model-index #region-us
# DavidAU/Gecko-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from 'NeuralNovel/Gecko-7B-v0.1' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Gecko-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Gecko-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #model-index #region-us \n", "# DavidAU/Gecko-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Gecko-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
GGUF quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen1.5-7B-Chat - GGUF - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen1.5-7B-Chat/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Qwen1.5-7B-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q2_K.gguf) | Q2_K | 2.89GB | | [Qwen1.5-7B-Chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.IQ3_XS.gguf) | IQ3_XS | 3.18GB | | [Qwen1.5-7B-Chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.IQ3_S.gguf) | IQ3_S | 3.32GB | | [Qwen1.5-7B-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q3_K_S.gguf) | Q3_K_S | 3.32GB | | [Qwen1.5-7B-Chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.IQ3_M.gguf) | IQ3_M | 3.48GB | | [Qwen1.5-7B-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q3_K.gguf) | Q3_K | 3.65GB | | [Qwen1.5-7B-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q3_K_M.gguf) | Q3_K_M | 3.65GB | | [Qwen1.5-7B-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q3_K_L.gguf) | Q3_K_L | 3.93GB | | [Qwen1.5-7B-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.IQ4_XS.gguf) | IQ4_XS | 4.02GB | | [Qwen1.5-7B-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q4_0.gguf) | Q4_0 | 4.2GB | | [Qwen1.5-7B-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.IQ4_NL.gguf) | IQ4_NL | 4.22GB | | [Qwen1.5-7B-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q4_K_S.gguf) | Q4_K_S | 4.23GB | | [Qwen1.5-7B-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q4_K.gguf) | Q4_K | 4.44GB | | [Qwen1.5-7B-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q4_K_M.gguf) | Q4_K_M | 4.44GB | | [Qwen1.5-7B-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q4_1.gguf) | Q4_1 | 4.62GB | | [Qwen1.5-7B-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q5_0.gguf) | Q5_0 | 5.03GB | | [Qwen1.5-7B-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q5_K_S.gguf) | Q5_K_S | 5.03GB | | [Qwen1.5-7B-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q5_K.gguf) | Q5_K | 5.15GB | | [Qwen1.5-7B-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q5_K_M.gguf) | Q5_K_M | 5.15GB | | [Qwen1.5-7B-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q5_1.gguf) | Q5_1 | 5.44GB | | 
[Qwen1.5-7B-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf/blob/main/Qwen1.5-7B-Chat.Q6_K.gguf) | Q6_K | 5.91GB | Original model description: --- license: other license_name: tongyi-qianwen license_link: >- https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-7B-Chat ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes; * No need for `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code. For the beta version, we have temporarily not included GQA (except for 32B) or the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-7B-Chat", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat") prompt = "Give me a short introduction to large language models." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` For quantized models, we advise you to use the GPTQ, AWQ, and GGUF counterparts, namely `Qwen1.5-7B-Chat-GPTQ-Int4`, `Qwen1.5-7B-Chat-GPTQ-Int8`, `Qwen1.5-7B-Chat-AWQ`, and `Qwen1.5-7B-Chat-GGUF`. ## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to cite us. 
``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
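The quickstart above targets the original full-precision checkpoint; for the GGUF files this repo actually hosts, here is a hedged Python sketch using `llama-cpp-python`. The repo id and Q4_K_M filename come from the table above; the package choice, and relying on a recent `llama-cpp-python` to pick up the chat template embedded in the GGUF metadata, are assumptions of this sketch.

```python
# Hedged sketch: run one of the quants listed above with llama-cpp-python.
# Assumes a recent llama-cpp-python that reads the GGUF's chat template;
# otherwise pass an explicit chat_format.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf",
    filename="Qwen1.5-7B-Chat.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
resp = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
])
print(resp["choices"][0]["message"]["content"])
```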
{}
RichardErkhov/Qwen_-_Qwen1.5-7B-Chat-gguf
null
[ "gguf", "region:us" ]
null
2024-04-12T02:08:40+00:00
[]
[]
TAGS #gguf #region-us
GGUF quantization made by Richard Erkhov. Github Discord Request more models Qwen1.5-7B-Chat - GGUF * Model creator: URL * Original model: URL Name: Qwen1.5-7B-Chat.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.89GB Name: Qwen1.5-7B-Chat.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 3.18GB Name: Qwen1.5-7B-Chat.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 3.32GB Name: Qwen1.5-7B-Chat.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 3.32GB Name: Qwen1.5-7B-Chat.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.48GB Name: Qwen1.5-7B-Chat.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.65GB Name: Qwen1.5-7B-Chat.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.65GB Name: Qwen1.5-7B-Chat.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.93GB Name: Qwen1.5-7B-Chat.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 4.02GB Name: Qwen1.5-7B-Chat.Q4\_0.gguf, Quant method: Q4\_0, Size: 4.2GB Name: Qwen1.5-7B-Chat.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 4.22GB Name: Qwen1.5-7B-Chat.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 4.23GB Name: Qwen1.5-7B-Chat.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.44GB Name: Qwen1.5-7B-Chat.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.44GB Name: Qwen1.5-7B-Chat.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.62GB Name: Qwen1.5-7B-Chat.Q5\_0.gguf, Quant method: Q5\_0, Size: 5.03GB Name: Qwen1.5-7B-Chat.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 5.03GB Name: Qwen1.5-7B-Chat.Q5\_K.gguf, Quant method: Q5\_K, Size: 5.15GB Name: Qwen1.5-7B-Chat.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 5.15GB Name: Qwen1.5-7B-Chat.Q5\_1.gguf, Quant method: Q5\_1, Size: 5.44GB Name: Qwen1.5-7B-Chat.Q6\_K.gguf, Quant method: Q6\_K, Size: 5.91GB Original model description: --- license: other license\_name: tongyi-qianwen license\_link: >- URL language: * en pipeline\_tag: text-generation tags: * chat --- Qwen1.5-7B-Chat =============== Introduction ------------ Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes; * No need for 'trust\_remote\_code'. For more details, please refer to our blog post and GitHub repo. Model Details ------------- Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code. For the beta version, we have temporarily not included GQA (except for 32B) or the mixture of SWA and full attention. Training details ---------------- We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. 
Requirements ------------ The code of Qwen1.5 has been included in the latest Hugging Face transformers, and we advise you to install 'transformers>=4.37.0', or you might encounter the following error: Quickstart ---------- Below is a code snippet using 'apply\_chat\_template' that shows how to load the tokenizer and model and how to generate content. For quantized models, we advise you to use the GPTQ, AWQ, and GGUF counterparts, namely 'Qwen1.5-7B-Chat-GPTQ-Int4', 'Qwen1.5-7B-Chat-GPTQ-Int8', 'Qwen1.5-7B-Chat-AWQ', and 'Qwen1.5-7B-Chat-GGUF'. Tips ---- * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation\_config.json'. If you find our work helpful, feel free to cite us.
[]
[ "TAGS\n#gguf #region-us \n" ]
null
transformers
# DavidAU/Tanuki-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`NeuralNovel/Tanuki-7B-v0.1`](https://huggingface.co/NeuralNovel/Tanuki-7B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NeuralNovel/Tanuki-7B-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Tanuki-7B-v0.1-Q4_K_M-GGUF --model tanuki-7b-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Tanuki-7B-v0.1-Q4_K_M-GGUF --model tanuki-7b-v0.1.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tanuki-7b-v0.1.Q4_K_M.gguf -n 128 ```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["NeuralNovel/Neural-Story-v1", "NeuralNovel/Creative-Logic-v1"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "inference": false, "model-index": [{"name": "Tanuki-7B-v0.1", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.8, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 83.14, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 60.54, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 66.33}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 75.85, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 39.8, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tanuki-7B-v0.1", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Tanuki-7B-v0.1-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "dataset:NeuralNovel/Neural-Story-v1", "dataset:NeuralNovel/Creative-Logic-v1", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-12T02:08:51+00:00
[]
[]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #dataset-NeuralNovel/Neural-Story-v1 #dataset-NeuralNovel/Creative-Logic-v1 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #model-index #region-us
# DavidAU/Tanuki-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from 'NeuralNovel/Tanuki-7B-v0.1' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Tanuki-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Tanuki-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #dataset-NeuralNovel/Neural-Story-v1 #dataset-NeuralNovel/Creative-Logic-v1 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #model-index #region-us \n", "# DavidAU/Tanuki-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Tanuki-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/arcee-ai/mistral-v2-sec-dolphin <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mistral-v2-sec-dolphin-GGUF/resolve/main/mistral-v2-sec-dolphin.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "arcee-ai/mistral-v2-sec-dolphin", "quantized_by": "mradermacher"}
mradermacher/mistral-v2-sec-dolphin-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:arcee-ai/mistral-v2-sec-dolphin", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:09:43+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-arcee-ai/mistral-v2-sec-dolphin #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-arcee-ai/mistral-v2-sec-dolphin #endpoints_compatible #region-us \n" ]
null
transformers
# DavidAU/Panda-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`NeuralNovel/Panda-7B-v0.1`](https://huggingface.co/NeuralNovel/Panda-7B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NeuralNovel/Panda-7B-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Panda-7B-v0.1-Q4_K_M-GGUF --model panda-7b-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Panda-7B-v0.1-Q4_K_M-GGUF --model panda-7b-v0.1.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m panda-7b-v0.1.Q4_K_M.gguf -n 128 ```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["NeuralNovel/Creative-Logic-v1", "NeuralNovel/Neural-Story-v1"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "inference": false}
DavidAU/Panda-7B-v0.1-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "dataset:NeuralNovel/Creative-Logic-v1", "dataset:NeuralNovel/Neural-Story-v1", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-12T02:10:15+00:00
[]
[]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #dataset-NeuralNovel/Creative-Logic-v1 #dataset-NeuralNovel/Neural-Story-v1 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
# DavidAU/Panda-7B-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from 'NeuralNovel/Panda-7B-v0.1' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Panda-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Panda-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #dataset-NeuralNovel/Creative-Logic-v1 #dataset-NeuralNovel/Neural-Story-v1 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "# DavidAU/Panda-7B-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Panda-7B-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
reinforcement-learning
sample-factory
An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r MrPrjnce/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
{"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "9.98 +/- 3.86", "name": "mean_reward", "verified": false}]}]}]}
MrPrjnce/rl_course_vizdoom_health_gathering_supreme
null
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-12T02:10:24+00:00
[]
[]
TAGS #sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
An APPO model trained on the doom_health_gathering_supreme environment. This model was trained using Sample-Factory 2.0: URL Documentation for how to use Sample-Factory can be found at URL ## Downloading the model After installing Sample-Factory, download the model with: ## Using the model To run the model after download, use the 'enjoy' script corresponding to this environment: You can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag. See URL for more details. ## Training with this model To continue training with this model, use the 'train' script corresponding to this environment: Note: you may have to adjust '--train_for_env_steps' to a suitably high number, as the experiment will resume at the number of steps it concluded at.
[ "## Downloading the model\n\nAfter installing Sample-Factory, download the model with:", "## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details", "## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at." ]
[ "TAGS\n#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "## Downloading the model\n\nAfter installing Sample-Factory, download the model with:", "## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details", "## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Grayx/unstable_85
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:11:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - fixiong/nekomaonn_lora These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of nekomaonn cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
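A minimal sketch for running this LoRA with diffusers, assuming the weights are in the standard attention-processor format produced by the DreamBooth LoRA training script; the sampling settings are illustrative, not from the card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against, then attach the adapter.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("fixiong/nekomaonn_lora")

# The instance prompt used during training, per the card metadata.
image = pipe("a photo of nekomaonn cat", num_inference_steps=25).images[0]
image.save("nekomaonn.png")
```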
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "diffusers", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "stabilityai/stable-diffusion-2-1-base", "inference": true, "instance_prompt": "a photo of nekomaonn cat"}
fixiong/nekomaonn_lora
null
[ "diffusers", "text-to-image", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "region:us" ]
null
2024-04-12T02:13:08+00:00
[]
[]
TAGS #diffusers #text-to-image #lora #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #region-us
# LoRA DreamBooth - fixiong/nekomaonn_lora These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of nekomaonn cat using DreamBooth. You can find some example images below. !img_0 !img_1 !img_2 !img_3 LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# LoRA DreamBooth - fixiong/nekomaonn_lora\n\nThese are LoRA adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of nekomaonn cat using DreamBooth. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nLoRA for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #text-to-image #lora #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #region-us \n", "# LoRA DreamBooth - fixiong/nekomaonn_lora\n\nThese are LoRA adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of nekomaonn cat using DreamBooth. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nLoRA for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
AshkanTaghipour/fused-ikea-feng
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-12T02:13:47+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/StoryWeaver-7b-Instruct-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from [`kasper52786/StoryWeaver-7b-Instruct-v0.1`](https://huggingface.co/kasper52786/StoryWeaver-7b-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/kasper52786/StoryWeaver-7b-Instruct-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/StoryWeaver-7b-Instruct-v0.1-Q4_K_M-GGUF --model storyweaver-7b-instruct-v0.1.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/StoryWeaver-7b-Instruct-v0.1-Q4_K_M-GGUF --model storyweaver-7b-instruct-v0.1.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m storyweaver-7b-instruct-v0.1.Q4_K_M.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "llama-cpp", "gguf-my-repo"], "base_model": ["mistralai/Mistral-7B-Instruct-v0.2"]}
DavidAU/StoryWeaver-7b-Instruct-v0.1-Q4_K_M-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-12T02:14:06+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #mistralai/Mistral-7B-Instruct-v0.2 #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# DavidAU/StoryWeaver-7b-Instruct-v0.1-Q4_K_M-GGUF This model was converted to GGUF format from 'kasper52786/StoryWeaver-7b-Instruct-v0.1' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/StoryWeaver-7b-Instruct-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'kasper52786/StoryWeaver-7b-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #mistralai/Mistral-7B-Instruct-v0.2 #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# DavidAU/StoryWeaver-7b-Instruct-v0.1-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'kasper52786/StoryWeaver-7b-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Mistral-7B-Neural-Story-mix-Q4_K_M-GGUF This model was converted to GGUF format from [`pcuenq/Mistral-7B-Neural-Story-mix`](https://huggingface.co/pcuenq/Mistral-7B-Neural-Story-mix) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/pcuenq/Mistral-7B-Neural-Story-mix) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Mistral-7B-Neural-Story-mix-Q4_K_M-GGUF --model mistral-7b-neural-story-mix.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Mistral-7B-Neural-Story-mix-Q4_K_M-GGUF --model mistral-7b-neural-story-mix.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-neural-story-mix.Q4_K_M.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "inference": true}
DavidAU/Mistral-7B-Neural-Story-mix-Q4_K_M-GGUF
null
[ "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-12T02:15:39+00:00
[]
[]
TAGS #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
# DavidAU/Mistral-7B-Neural-Story-mix-Q4_K_M-GGUF This model was converted to GGUF format from 'pcuenq/Mistral-7B-Neural-Story-mix' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Mistral-7B-Neural-Story-mix-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'pcuenq/Mistral-7B-Neural-Story-mix' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "# DavidAU/Mistral-7B-Neural-Story-mix-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'pcuenq/Mistral-7B-Neural-Story-mix' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Mistral-7B-storywriter-Q4_K_M-GGUF This model was converted to GGUF format from [`Norquinal/Mistral-7B-storywriter`](https://huggingface.co/Norquinal/Mistral-7B-storywriter) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Norquinal/Mistral-7B-storywriter) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Mistral-7B-storywriter-Q4_K_M-GGUF --model mistral-7b-storywriter.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Mistral-7B-storywriter-Q4_K_M-GGUF --model mistral-7b-storywriter.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-storywriter.Q4_K_M.gguf -n 128 ```
{"license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/Mistral-7B-storywriter-Q4_K_M-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-12T02:17:00+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us
# DavidAU/Mistral-7B-storywriter-Q4_K_M-GGUF This model was converted to GGUF format from 'Norquinal/Mistral-7B-storywriter' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Mistral-7B-storywriter-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Norquinal/Mistral-7B-storywriter' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us \n", "# DavidAU/Mistral-7B-storywriter-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'Norquinal/Mistral-7B-storywriter' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1870 - Overall Precision: 0.4821 - Overall Recall: 0.5760 - Overall F1: 0.5249 - Overall Accuracy: 0.9506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.2295 | 1.0 | 1358 | 0.1735 | 0.3341 | 0.3926 | 0.3610 | 0.9483 | | 0.1401 | 2.0 | 2716 | 0.1512 | 0.3905 | 0.5413 | 0.4537 | 0.9509 | | 0.0948 | 3.0 | 4074 | 0.1627 | 0.4667 | 0.5070 | 0.4860 | 0.9578 | | 0.0778 | 4.0 | 5432 | 0.1672 | 0.4831 | 0.5642 | 0.5205 | 0.9587 | | 0.0614 | 5.0 | 6790 | 0.1755 | 0.4967 | 0.5781 | 0.5344 | 0.9594 | ### Framework versions - Transformers 4.39.3 - Pytorch 1.12.1+cu113 - Datasets 2.18.0 - Tokenizers 0.15.2
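The hyperparameter list above maps directly onto `transformers.TrainingArguments`; a hedged reconstruction is sketched below (the output directory and evaluation cadence are assumptions inferred from the card, and unlisted optimizer settings are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",          # assumption: matches the model name
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",   # assumption: the results table shows one eval per epoch
)
```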
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/biogpt", "model-index": [{"name": "results", "results": []}]}
ttlanhhh/results
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "token-classification", "generated_from_trainer", "base_model:microsoft/biogpt", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T02:17:53+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
results ======= This model is a fine-tuned version of microsoft/biogpt on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1870 * Overall Precision: 0.4821 * Overall Recall: 0.5760 * Overall F1: 0.5249 * Overall Accuracy: 0.9506 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 1.12.1+cu113 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.12.1+cu113\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.12.1+cu113\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: NousResearch/Hermes-2-Pro-Mistral-7B - model: WizardLM/WizardMath-7B-V1.1 merge_method: slerp base_model: NousResearch/Hermes-2-Pro-Mistral-7B dtype: bfloat16 parameters: t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers ```
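As a usage note, this configuration can be reproduced with the mergekit CLI; a minimal sketch, assuming the YAML above is saved as `config.yml` and writing to a placeholder output directory:

```bash
pip install mergekit
mergekit-yaml config.yml ./merged-model
```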
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
mergekit-community/mergekit-slerp-mczwvqr
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T02:29:53+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * NousResearch/Hermes-2-Pro-Mistral-7B * WizardLM/WizardMath-7B-V1.1 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-7b-sft-qlora-1 This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the chansung/no_robots_only_coding dataset. It achieves the following results on the evaluation set: - Loss: 2.1615 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 23.7344 | 0.91 | 5 | 7.9584 | | 14.6026 | 2.0 | 11 | 6.8289 | | 10.8118 | 2.91 | 16 | 6.4185 | | 10.8598 | 4.0 | 22 | 5.1061 | | 7.9354 | 4.91 | 27 | 1.7011 | | 2.0354 | 6.0 | 33 | 1.4461 | | 1.4855 | 6.91 | 38 | 1.3565 | | 1.326 | 8.0 | 44 | 1.2935 | | 1.1375 | 8.91 | 49 | 1.2696 | | 0.9091 | 10.0 | 55 | 1.2716 | | 0.8111 | 10.91 | 60 | 1.2861 | | 0.689 | 12.0 | 66 | 1.3148 | | 0.6341 | 12.91 | 71 | 1.3391 | | 0.5359 | 14.0 | 77 | 1.4232 | | 0.4664 | 14.91 | 82 | 1.5107 | | 0.3951 | 16.0 | 88 | 1.6597 | | 0.3593 | 16.91 | 93 | 1.9377 | | 0.2802 | 18.0 | 99 | 1.9024 | | 0.2613 | 18.91 | 104 | 2.0981 | | 0.2262 | 20.0 | 110 | 2.1472 | | 0.2169 | 20.91 | 115 | 2.1633 | | 0.2232 | 22.0 | 121 | 2.1595 | | 0.2096 | 22.73 | 125 | 2.1615 | ### Framework versions - PEFT 0.7.1 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
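A minimal sketch for loading this QLoRA adapter for inference with PEFT; `device_map` is an assumption, and access to the gated `google/gemma-7b` base weights is required:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads google/gemma-7b (taken from the adapter config) and applies the LoRA adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    "chansung/gemma-7b-sft-qlora-1",
    device_map="auto",  # assumption, not from the card
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
```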
{"license": "gemma", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["chansung/no_robots_only_coding"], "base_model": "google/gemma-7b", "model-index": [{"name": "gemma-7b-sft-qlora-1", "results": []}]}
chansung/gemma-7b-sft-qlora-1
null
[ "peft", "tensorboard", "safetensors", "gemma", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:chansung/no_robots_only_coding", "base_model:google/gemma-7b", "license:gemma", "4-bit", "region:us" ]
null
2024-04-12T02:31:16+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #gemma #alignment-handbook #trl #sft #generated_from_trainer #dataset-chansung/no_robots_only_coding #base_model-google/gemma-7b #license-gemma #4-bit #region-us
gemma-7b-sft-qlora-1 ==================== This model is a fine-tuned version of google/gemma-7b on the chansung/no\_robots\_only\_coding dataset. It achieves the following results on the evaluation set: * Loss: 2.1615 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 16 * total\_eval\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 25 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 25", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #gemma #alignment-handbook #trl #sft #generated_from_trainer #dataset-chansung/no_robots_only_coding #base_model-google/gemma-7b #license-gemma #4-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 25", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
0x0uncle0/uncle08
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T02:33:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-newguyme/chpt These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below. prompt: red circle with blue background ![images_0](./images_0.png) prompt: cyan circle with brown floral background ![images_1](./images_1.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a minimal sketch follows below) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
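The "How to use" snippet above is still a TODO; here is a minimal, hedged sketch of how ControlNet weights like these are typically loaded with diffusers. The repo id (`newguyme/flir_controlnet_3epoch_chpt`, taken from this record's id field), the dtype, and the conditioning-image path are assumptions, not details confirmed by the card:

```python
# A minimal sketch (not from the card) for running these ControlNet weights.
# Assumes the weights live at newguyme/flir_controlnet_3epoch_chpt and that a
# conditioning image matching the training conditioning is available locally.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "newguyme/flir_controlnet_3epoch_chpt", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

conditioning = load_image("./conditioning_image.png")  # hypothetical path
image = pipe(
    "red circle with blue background", image=conditioning, num_inference_steps=20
).images[0]
image.save("output.png")
```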
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet", "diffusers-training"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true}
newguyme/flir_controlnet_3epoch_chpt
null
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
null
2024-04-12T02:33:45+00:00
[]
[]
TAGS #diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us
# controlnet-newguyme/chpt These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with new type of conditioning. You can find some example images below. prompt: red circle with blue background !images_0) prompt: cyan circle with brown floral background !images_1) ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# controlnet-newguyme/chpt\n\nThese are controlnet weights trained on runwayml/stable-diffusion-v1-5 with new type of conditioning.\nYou can find some example images below.\n\nprompt: red circle with blue background\n!images_0)\nprompt: cyan circle with brown floral background\n!images_1)", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #controlnet #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us \n", "# controlnet-newguyme/chpt\n\nThese are controlnet weights trained on runwayml/stable-diffusion-v1-5 with new type of conditioning.\nYou can find some example images below.\n\nprompt: red circle with blue background\n!images_0)\nprompt: cyan circle with brown floral background\n!images_1)", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jukofyork/Eurus-70b-sft-fixed <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Eurus-70b-sft-fixed-GGUF/resolve/main/Eurus-70b-sft-fixed.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
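The usage section above points to TheBloke's READMEs for concatenating multi-part files (the Q6_K and Q8_0 quants here ship as `.part1of2`/`.part2of2`). As a convenience, here is a minimal sketch of the same idea in Python, assuming both parts have already been downloaded into the working directory (on Linux/macOS, `cat file.part1of2 file.part2of2 > file` achieves the same):

```python
# Minimal sketch: rejoin a split GGUF quant from this repo into one file.
# Assumes the .part1of2/.part2of2 files sit in the current directory.
import shutil
from pathlib import Path

stem = "Eurus-70b-sft-fixed.Q6_K.gguf"
parts = sorted(Path(".").glob(stem + ".part*of*"))
with open(stem, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streamed copy; avoids loading ~30 GB into RAM
print(f"wrote {stem} from {len(parts)} parts")
```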
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["reasoning"], "datasets": ["openbmb/UltraInteract_sft", "stingning/ultrachat", "openchat/openchat_sharegpt4_dataset", "Open-Orca/OpenOrca"], "base_model": "jukofyork/Eurus-70b-sft-fixed", "quantized_by": "mradermacher"}
mradermacher/Eurus-70b-sft-fixed-GGUF
null
[ "transformers", "gguf", "reasoning", "en", "dataset:openbmb/UltraInteract_sft", "dataset:stingning/ultrachat", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:Open-Orca/OpenOrca", "base_model:jukofyork/Eurus-70b-sft-fixed", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:34:22+00:00
[]
[ "en" ]
TAGS #transformers #gguf #reasoning #en #dataset-openbmb/UltraInteract_sft #dataset-stingning/ultrachat #dataset-openchat/openchat_sharegpt4_dataset #dataset-Open-Orca/OpenOrca #base_model-jukofyork/Eurus-70b-sft-fixed #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #reasoning #en #dataset-openbmb/UltraInteract_sft #dataset-stingning/ultrachat #dataset-openchat/openchat_sharegpt4_dataset #dataset-Open-Orca/OpenOrca #base_model-jukofyork/Eurus-70b-sft-fixed #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
null
# KMeans Clustering Model for Customer Segmentation ## Model Description This model is a KMeans clustering model trained to segment potential vehicle credit customers into different groups based on their responses to a survey. The survey captures various aspects of customer preferences and behaviors regarding vehicle acquisition and financing. ## Usage To use this model, load it with Python's pickle module and input the normalized feature data corresponding to the customer survey responses. The model will assign each new customer to one of the four clusters. ## Requirements - Python 3.6+ - scikit-learn - numpy - pandas ## Example ```python import pickle import numpy as np # Load the model with open('kmeans_cluster_model.pkl', 'rb') as file: model = pickle.load(file) # Example data (normalized) data = np.array([[0.1, 0.2, 0.1, -0.1, 0.0, 0.1, -0.2]]) cluster_label = model.predict(data) print(f'Assigned Cluster: {cluster_label[0]}') ``` ## Files - `kmeans_cluster_model.pkl`: The trained KMeans model. - `README.md`: This file. - `requirements.txt`: List of libraries required to run the model.
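The card stresses that inputs must be normalized the same way as the training data. Below is a hedged sketch of that preprocessing step, assuming a fitted scikit-learn scaler was saved alongside the model; the `scaler.pkl` file name is hypothetical and is not among the files listed above:

```python
# Hedged sketch: scale raw survey answers before clustering.
# 'scaler.pkl' is a hypothetical artifact; only kmeans_cluster_model.pkl
# is listed in this repo's files.
import pickle
import numpy as np

with open('scaler.pkl', 'rb') as file:
    scaler = pickle.load(file)
with open('kmeans_cluster_model.pkl', 'rb') as file:
    model = pickle.load(file)

raw = np.array([[3, 5, 2, 1, 4, 2, 5]])  # seven raw survey responses
normalized = scaler.transform(raw)       # same normalization as at training time
print(f'Assigned Cluster: {model.predict(normalized)[0]}')
```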
{}
EdissonMora96/clientes_credito_vehiculos
null
[ "region:us" ]
null
2024-04-12T02:35:44+00:00
[]
[]
TAGS #region-us
# KMeans Clustering Model for Customer Segmentation ## Model Description This model is a KMeans clustering model trained to segment potential vehicle credit customers into different groups based on their responses to a survey. The survey captures various aspects of customer preferences and behaviors regarding vehicle acquisition and financing. ## Usage To use this model, load it with Python's pickle module and input the normalized feature data corresponding to the customer survey responses. The model will assign each new customer to one of the four clusters. ## Requirements - Python 3.6+ - scikit-learn - numpy - pandas ## Example ## Files - 'kmeans_cluster_model.pkl': The trained KMeans model. - 'URL': This file. - 'URL': List of libraries required to run the model.
[ "# KMeans Clustering Model for Customer Segmentation", "## Model Description\nThis model is a KMeans clustering model trained to segment potential vehicle credit customers into different groups based on their responses to a survey. The survey captures various aspects of customer preferences and behaviors regarding vehicle acquisition and financing.", "## Usage\nTo use this model, load it with Python's pickle module and input the normalized feature data corresponding to the customer survey responses. The model will assign each new customer to one of the four clusters.", "## Requirements\n- Python 3.6+\n- scikit-learn\n- numpy\n- pandas", "## Example", "## Files\n- 'kmeans_cluster_model.pkl': The trained KMeans model.\n- 'URL': This file.\n- 'URL': List of libraries required to run the model." ]
[ "TAGS\n#region-us \n", "# KMeans Clustering Model for Customer Segmentation", "## Model Description\nThis model is a KMeans clustering model trained to segment potential vehicle credit customers into different groups based on their responses to a survey. The survey captures various aspects of customer preferences and behaviors regarding vehicle acquisition and financing.", "## Usage\nTo use this model, load it with Python's pickle module and input the normalized feature data corresponding to the customer survey responses. The model will assign each new customer to one of the four clusters.", "## Requirements\n- Python 3.6+\n- scikit-learn\n- numpy\n- pandas", "## Example", "## Files\n- 'kmeans_cluster_model.pkl': The trained KMeans model.\n- 'URL': This file.\n- 'URL': List of libraries required to run the model." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-finetuned-mental This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
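The card carries no usage snippet, so here is a minimal, hedged sketch for loading this PEFT adapter on top of its GPTQ base. The prompt format and the extra dependencies a GPTQ base typically needs (e.g. `auto-gptq`/`optimum`) are assumptions, not stated in the card:

```python
# Hedged sketch (not from the card): load the LoRA adapter on its GPTQ base.
# A GPTQ base usually also requires the auto-gptq and optimum packages.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "freakfire5/mistral-finetuned-mental", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GPTQ")

prompt = "[INST] I have been feeling anxious lately. What can I do? [/INST]"  # assumed format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```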
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "model-index": [{"name": "mistral-finetuned-mental", "results": []}]}
freakfire5/mistral-finetuned-mental
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-04-12T02:41:49+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us
# mistral-finetuned-mental This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistral-finetuned-mental\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us \n", "# mistral-finetuned-mental\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
translation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # m2m100_418M-finetuned-luo-to-en This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 4.1964 - eval_bleu: 1.5595 - eval_runtime: 1470.1124 - eval_samples_per_second: 1.02 - eval_steps_per_second: 0.128 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.38.2 - Pytorch 2.0.1+cu117 - Datasets 2.17.1 - Tokenizers 0.15.2
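The card gives no inference example; a minimal sketch with the standard M2M-100 API follows. Note that M2M-100 has no dedicated Luo language code, so which source code (if any) this fine-tune expects is an assumption — check the repo's tokenizer configuration; the sample sentence is likewise hypothetical:

```python
# Hedged sketch (not from the card): Luo -> English translation with M2M-100.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

repo = "Pr123/m2m100_418M-finetuned-luo-to-en"
tokenizer = M2M100Tokenizer.from_pretrained(repo)
model = M2M100ForConditionalGeneration.from_pretrained(repo)

encoded = tokenizer("Idhi nade?", return_tensors="pt")  # hypothetical Luo input
generated = model.generate(
    **encoded, forced_bos_token_id=tokenizer.get_lang_id("en")  # force English output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```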
{"license": "mit", "tags": ["translation", "generated_from_trainer"], "base_model": "facebook/m2m100_418M", "model-index": [{"name": "m2m100_418M-finetuned-luo-to-en", "results": []}]}
Pr123/m2m100_418M-finetuned-luo-to-en
null
[ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "translation", "generated_from_trainer", "base_model:facebook/m2m100_418M", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:41:50+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #m2m_100 #text2text-generation #translation #generated_from_trainer #base_model-facebook/m2m100_418M #license-mit #autotrain_compatible #endpoints_compatible #region-us
# m2m100_418M-finetuned-luo-to-en This model is a fine-tuned version of facebook/m2m100_418M on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 4.1964 - eval_bleu: 1.5595 - eval_runtime: 1470.1124 - eval_samples_per_second: 1.02 - eval_steps_per_second: 0.128 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.38.2 - Pytorch 2.0.1+cu117 - Datasets 2.17.1 - Tokenizers 0.15.2
[ "# m2m100_418M-finetuned-luo-to-en\n\nThis model is a fine-tuned version of facebook/m2m100_418M on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 4.1964\n- eval_bleu: 1.5595\n- eval_runtime: 1470.1124\n- eval_samples_per_second: 1.02\n- eval_steps_per_second: 0.128\n- step: 0", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 12\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.0.1+cu117\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #m2m_100 #text2text-generation #translation #generated_from_trainer #base_model-facebook/m2m100_418M #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# m2m100_418M-finetuned-luo-to-en\n\nThis model is a fine-tuned version of facebook/m2m100_418M on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 4.1964\n- eval_bleu: 1.5595\n- eval_runtime: 1470.1124\n- eval_samples_per_second: 1.02\n- eval_steps_per_second: 0.128\n- step: 0", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 12\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.0.1+cu117\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
null
null
# DavidAU/MPT-7b-WizardLM_Uncensored-Storywriter-Merge-Q4_K_M-GGUF This model was converted to GGUF format from [`TehVenom/MPT-7b-WizardLM_Uncensored-Storywriter-Merge`](https://huggingface.co/TehVenom/MPT-7b-WizardLM_Uncensored-Storywriter-Merge) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TehVenom/MPT-7b-WizardLM_Uncensored-Storywriter-Merge) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/MPT-7b-WizardLM_Uncensored-Storywriter-Merge-Q4_K_M-GGUF --model mpt-7b-wizardlm_uncensored-storywriter-merge.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/MPT-7b-WizardLM_Uncensored-Storywriter-Merge-Q4_K_M-GGUF --model mpt-7b-wizardlm_uncensored-storywriter-merge.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mpt-7b-wizardlm_uncensored-storywriter-merge.Q4_K_M.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/MPT-7b-WizardLM_Uncensored-Storywriter-Merge-Q4_K_M-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-12T02:43:06+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/MPT-7b-WizardLM_Uncensored-Storywriter-Merge-Q4_K_M-GGUF This model was converted to GGUF format from 'TehVenom/MPT-7b-WizardLM_Uncensored-Storywriter-Merge' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/MPT-7b-WizardLM_Uncensored-Storywriter-Merge-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'TehVenom/MPT-7b-WizardLM_Uncensored-Storywriter-Merge' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/MPT-7b-WizardLM_Uncensored-Storywriter-Merge-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'TehVenom/MPT-7b-WizardLM_Uncensored-Storywriter-Merge' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # saffin/vit_ivi_first_test This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3207 - Train Accuracy: 1.0 - Validation Loss: 0.2912 - Validation Accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1020, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 1.3768 | 0.7794 | 1.0830 | 1.0 | 0 | | 0.8993 | 0.9951 | 0.7265 | 1.0 | 1 | | 0.5984 | 0.9951 | 0.4975 | 1.0 | 2 | | 0.4212 | 0.9951 | 0.3697 | 1.0 | 3 | | 0.3207 | 1.0 | 0.2912 | 1.0 | 4 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.8.0 - Datasets 2.18.0 - Tokenizers 0.13.3
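The card has no inference snippet; below is a minimal, hedged sketch for running this TensorFlow ViT classifier. It assumes the image processor config was pushed to the repo alongside the weights, and the input image path is hypothetical:

```python
# Hedged sketch (not from the card): classify an image with this TF ViT model.
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

repo = "saffin/vit_ivi_first_test"
processor = AutoImageProcessor.from_pretrained(repo)  # assumes processor config was uploaded
model = TFViTForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted = int(logits.numpy().argmax(-1)[0])
print(model.config.id2label[predicted])
```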
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "saffin/vit_ivi_first_test", "results": []}]}
saffin/vit_ivi_first_test
null
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:44:08+00:00
[]
[]
TAGS #transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
saffin/vit\_ivi\_first\_test ============================ This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.3207 * Train Accuracy: 1.0 * Validation Loss: 0.2912 * Validation Accuracy: 1.0 * Epoch: 4 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 3e-05, 'decay\_steps': 1020, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.31.0 * TensorFlow 2.8.0 * Datasets 2.18.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 1020, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.31.0\n* TensorFlow 2.8.0\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 1020, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.31.0\n* TensorFlow 2.8.0\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Grayx/unstable_86
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:44:33+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/StoryTeller-10b-2e-v2-Q4_K_M-GGUF This model was converted to GGUF format from [`ImagineIt/StoryTeller-10b-2e-v2`](https://huggingface.co/ImagineIt/StoryTeller-10b-2e-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ImagineIt/StoryTeller-10b-2e-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/StoryTeller-10b-2e-v2-Q4_K_M-GGUF --model storyteller-10b-2e-v2.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/StoryTeller-10b-2e-v2-Q4_K_M-GGUF --model storyteller-10b-2e-v2.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m storyteller-10b-2e-v2.Q4_K_M.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/StoryTeller-10b-2e-v2-Q4_K_M-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-12T02:45:13+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/StoryTeller-10b-2e-v2-Q4_K_M-GGUF This model was converted to GGUF format from 'ImagineIt/StoryTeller-10b-2e-v2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/StoryTeller-10b-2e-v2-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'ImagineIt/StoryTeller-10b-2e-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/StoryTeller-10b-2e-v2-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'ImagineIt/StoryTeller-10b-2e-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkpoints-mistral-300M This model is a fine-tuned version of [None](https://huggingface.co/None) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.205 ## Model description More information needed ## Training and evaluation data ***** train metrics ***** epoch = 13.91 train_loss = 2.205 ***** eval metrics ***** epoch = 13.91 eval_loss = 2.4 perplexity = 11.0228 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 16 - total_train_batch_size: 192 - total_eval_batch_size: 12 - optimizer: Adam with betas=(0.9,0.95) and epsilon=0.0001 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 4 - num_epochs: 6 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.5 - Tokenizers 0.14.1 ## Usage ```python from transformers import pipeline pipe = pipeline("text-generation", model="ayousanz/japanese-mistral-0.3b-base") from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer import torch MODEL_NAME = "ayousanz/japanese-mistral-0.3b-base" torch.set_float32_matmul_precision('high') DEVICE = "cuda" if torch.cuda.is_available(): print("cuda") DEVICE = "cuda" else: print("cpu") DEVICE = "cpu" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME,use_fast=False) model = AutoModelForCausalLM.from_pretrained( MODEL_NAME, trust_remote_code=True, ).to(DEVICE) prompt = "大規模言語モデルとは、" inputs = tokenizer(prompt, add_special_tokens=False,return_tensors="pt").to(model.device) with torch.no_grad(): outputs = model.generate( inputs["input_ids"], max_new_tokens=256, do_sample=True, early_stopping=False, top_p=0.95, top_k=50, temperature=0.9, no_repeat_ngram_size=2, num_beams=3 ) outputs_txt = tokenizer.decode(outputs[0]) print(outputs_txt) ```
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "None", "model-index": [{"name": "checkpoints-mistral-0.3b", "results": []}]}
ayousanz/japanese-mistral-0.3b-base
null
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "base_model:None", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T02:45:59+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #generated_from_trainer #base_model-None #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# checkpoints-mistral-300M This model is a fine-tuned version of None on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.205 ## Model description More information needed ## Training and evaluation data * train metrics * epoch = 13.91 train_loss = 2.205 * eval metrics * epoch = 13.91 eval_loss = 2.4 perplexity = 11.0228 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 16 - total_train_batch_size: 192 - total_eval_batch_size: 12 - optimizer: Adam with betas=(0.9,0.95) and epsilon=0.0001 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 4 - num_epochs: 6 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.5 - Tokenizers 0.14.1 ## Usage
[ "# checkpoints-mistral-300M\n\nThis model is a fine-tuned version of None on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.205", "## Model description\n\nMore information needed", "## Training and evaluation data\n\n* train metrics *\n\n epoch = 13.91\n train_loss = 2.205\n\n* eval metrics *\n\n epoch = 13.91\n eval_loss = 2.4\n perplexity = 11.0228", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 192\n- total_eval_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.95) and epsilon=0.0001\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 4\n- num_epochs: 6\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.5\n- Tokenizers 0.14.1", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #generated_from_trainer #base_model-None #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# checkpoints-mistral-300M\n\nThis model is a fine-tuned version of None on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.205", "## Model description\n\nMore information needed", "## Training and evaluation data\n\n* train metrics *\n\n epoch = 13.91\n train_loss = 2.205\n\n* eval metrics *\n\n epoch = 13.91\n eval_loss = 2.4\n perplexity = 11.0228", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 192\n- total_eval_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.95) and epsilon=0.0001\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 4\n- num_epochs: 6\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.5\n- Tokenizers 0.14.1", "## Usage" ]
text-generation
transformers
# DavidAU/SOLARC-M-10.7B-Q4_K_M-GGUF This model was converted to GGUF format from [`DopeorNope/SOLARC-M-10.7B`](https://huggingface.co/DopeorNope/SOLARC-M-10.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DopeorNope/SOLARC-M-10.7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/SOLARC-M-10.7B-Q4_K_M-GGUF --model solarc-m-10.7b.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/SOLARC-M-10.7B-Q4_K_M-GGUF --model solarc-m-10.7b.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solarc-m-10.7b.Q4_K_M.gguf -n 128 ```
{"language": ["ko"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "tags": ["merge", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"}
DavidAU/SOLARC-M-10.7B-Q4_K_M-GGUF
null
[ "transformers", "gguf", "merge", "llama-cpp", "gguf-my-repo", "text-generation", "ko", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:47:39+00:00
[]
[ "ko" ]
TAGS #transformers #gguf #merge #llama-cpp #gguf-my-repo #text-generation #ko #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
# DavidAU/SOLARC-M-10.7B-Q4_K_M-GGUF This model was converted to GGUF format from 'DopeorNope/SOLARC-M-10.7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/SOLARC-M-10.7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'DopeorNope/SOLARC-M-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #merge #llama-cpp #gguf-my-repo #text-generation #ko #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n", "# DavidAU/SOLARC-M-10.7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'DopeorNope/SOLARC-M-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Nous-Capybara-3B-V1.9-Q8_0-GGUF This model was converted to GGUF format from [`NousResearch/Nous-Capybara-3B-V1.9`](https://huggingface.co/NousResearch/Nous-Capybara-3B-V1.9) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NousResearch/Nous-Capybara-3B-V1.9) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Nous-Capybara-3B-V1.9-Q8_0-GGUF --model nous-capybara-3b-v1.9.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Nous-Capybara-3B-V1.9-Q8_0-GGUF --model nous-capybara-3b-v1.9.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nous-capybara-3b-v1.9.Q8_0.gguf -n 128 ```
{"language": ["eng"], "license": ["mit"], "tags": ["sft", "StableLM", "llama-cpp", "gguf-my-repo"], "datasets": ["LDJnr/Capybara", "LDJnr/LessWrong-Amplify-Instruct", "LDJnr/Pure-Dove", "LDJnr/Verified-Camel"]}
DavidAU/Nous-Capybara-3B-V1.9-Q8_0-GGUF
null
[ "gguf", "sft", "StableLM", "llama-cpp", "gguf-my-repo", "eng", "dataset:LDJnr/Capybara", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Verified-Camel", "license:mit", "region:us" ]
null
2024-04-12T02:48:29+00:00
[]
[ "eng" ]
TAGS #gguf #sft #StableLM #llama-cpp #gguf-my-repo #eng #dataset-LDJnr/Capybara #dataset-LDJnr/LessWrong-Amplify-Instruct #dataset-LDJnr/Pure-Dove #dataset-LDJnr/Verified-Camel #license-mit #region-us
# DavidAU/Nous-Capybara-3B-V1.9-Q8_0-GGUF This model was converted to GGUF format from 'NousResearch/Nous-Capybara-3B-V1.9' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Nous-Capybara-3B-V1.9-Q8_0-GGUF\nThis model was converted to GGUF format from 'NousResearch/Nous-Capybara-3B-V1.9' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #sft #StableLM #llama-cpp #gguf-my-repo #eng #dataset-LDJnr/Capybara #dataset-LDJnr/LessWrong-Amplify-Instruct #dataset-LDJnr/Pure-Dove #dataset-LDJnr/Verified-Camel #license-mit #region-us \n", "# DavidAU/Nous-Capybara-3B-V1.9-Q8_0-GGUF\nThis model was converted to GGUF format from 'NousResearch/Nous-Capybara-3B-V1.9' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-to-image
diffusers
# Nostalgic Life A mix of Clumsy_Trainer's Cool Kids v1 and v2, with DucHaiten's GoldenLife compositions and OneRing's Esthetic Retro Anime's style to fill us with nostalgia! Samples and prompt by digiplay: ![digiplay samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/VoYs-ZdcBBu7AGIYKDObS.png) (Click for larger) A goddess of sea,Teal long dress,Compassion,dophin and flyfish,penguin,ice,ice rock, More samples and prompts: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/AMPZYkzhOxvY738XXND0R.png) (Click for larger) Top left: Anime cute little girl, bangs, depth of field, embedded, hair ribbon, long hair, looking at viewer, neck ribbon, non-web source, palm leaf, palm tree, purple eyes, purple hair, red ribbon, ribbon, self upload, solo Top right: highquality, masterpiece, 1girl, Chi-Chi, close up, :D, smile, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar Bottom left: videogames, little pepperoni pizza winner, robert jordan, josephine wall, roll20, hidari illumination, radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, comic, watched towel. 4k art on canvas by kyoani and ROSSDRAWS Bottom right: Cartoon Pretty CUTE LITTLE Girl, sitting on a box of rockets, CHIBI EYES, holding coke rocket, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza Original pages: https://civitai.com/models/137781?modelVersionId=152147 (Esthetic Retro Anime) https://huggingface.co/Yntec/KIDSILLUSTRATIONS KIDS ILLUSTRATIONS V2 - https://civitai.com/models/60724?modelVersionId=67980 DucHaiten-GoldenLife - https://tensor.art/models/628276277415133426 https://huggingface.co/Yntec/GoodLife https://huggingface.co/Yntec/Nostalgic # Recipes: - SuperMerger Weight Sum MBW 0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0 Model A: Nostalgic Model B: GoodLife Output: NostalgicLife Bake MoistMixV2VAE: Output: NostalgicLifeVAE
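Since the record advertises `diffusers` with a `StableDiffusionPipeline` tag, a minimal usage sketch (my addition, not part of the original card), using the first sample prompt:

```python
# Sketch: generate an image from this merge via diffusers (assumes a CUDA GPU is available).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/NostalgicLife", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "A goddess of sea, Teal long dress, Compassion, dolphin and flyfish, penguin, ice, ice rock",
    num_inference_steps=30,
).images[0]
image.save("nostalgic_life.png")
```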
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["Anime", "90s", "Retro", "Vintage", "Children Book", "Kids Illustrations", "Children", "Book Illustrations", "3D", "Pixar", "Clumsy_Trainer", "OneRing", "DucHaiten", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers"], "pipeline_tag": "text-to-image"}
Yntec/NostalgicLife
null
[ "diffusers", "safetensors", "Anime", "90s", "Retro", "Vintage", "Children Book", "Kids Illustrations", "Children", "Book Illustrations", "3D", "Pixar", "Clumsy_Trainer", "OneRing", "DucHaiten", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-12T02:48:39+00:00
[]
[]
TAGS #diffusers #safetensors #Anime #90s #Retro #Vintage #Children Book #Kids Illustrations #Children #Book Illustrations #3D #Pixar #Clumsy_Trainer #OneRing #DucHaiten #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-creativeml-openrail-m #endpoints_compatible #has_space #diffusers-StableDiffusionPipeline #region-us
# Nostalgic Life A mix of Clumsy_Trainer's Cool Kids v1 and v2, with DucHaiten's GoldenLife compositions and OneRing's Esthetic Retro Anime's style to fill us with nostalgia! Samples and prompt by digiplay: !digiplay samples (Click for larger) A goddess of sea,Teal long dress,Compassion,dophin and flyfish,penguin,ice,ice rock, More samples and prompts: !image/png (Click for larger) Top left: Anime cute little girl, bangs, depth of field, embedded, hair ribbon, long hair, looking at viewer, neck ribbon, non-web source, palm leaf, palm tree, purple eyes, purple hair, red ribbon, ribbon, self upload, solo Top right: highquality, masterpiece, 1girl, Chi-Chi, close up, :D, smile, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar Bottom left: videogames, little pepperoni pizza winner, robert jordan, josephine wall, roll20, hidari illumination, radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, comic, watched towel. 4k art on canvas by kyoani and ROSSDRAWS Bottom right: Cartoon Pretty CUTE LITTLE Girl, sitting on a box of rockets, CHIBI EYES, holding coke rocket, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza Original pages: URL (Esthetic Retro Anime) URL KIDS ILLUSTRATIONS V2 - URL DucHaiten-GoldenLife - URL URL URL # Recipes: - SuperMerger Weight Sum MBW 0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0 Model A: Nostalgic Model B: GoodLife Output: NostalgicLife Bake MoistMixV2VAE: Output: NostalgicLifeVAE
[ "# Nostalgic Life\n\nA mix of Clumsy_Trainer's Cool Kids v1 and v2, with DucHaiten's GoldenLife compositions and OneRing's Esthetic Retro Anime's style to fill us with nostalgia!\n\nSamples and prompt by digiplay:\n\n!digiplay samples\n\n(Click for larger)\n\nA goddess of sea,Teal long dress,Compassion,dophin and flyfish,penguin,ice,ice rock,\n\nMore samples and prompts:\n\n!image/png\n\n(Click for larger)\n\nTop left: Anime cute little girl, bangs, depth of field, embedded, hair ribbon, long hair, looking at viewer, neck ribbon, non-web source, palm leaf, palm tree, purple eyes, purple hair, red ribbon, ribbon, self upload, solo\n\nTop right: highquality, masterpiece, 1girl, Chi-Chi, close up, :D, smile, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar\n\nBottom left: videogames, little pepperoni pizza winner, robert jordan, josephine wall, roll20, hidari illumination, radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, comic, watched towel. 4k art on canvas by kyoani and ROSSDRAWS\n\nBottom right: Cartoon Pretty CUTE LITTLE Girl, sitting on a box of rockets, CHIBI EYES, holding coke rocket, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza\n\nOriginal pages:\n\nURL (Esthetic Retro Anime)\n\nURL\n\nKIDS ILLUSTRATIONS V2 - URL\n\nDucHaiten-GoldenLife - URL\n\nURL\n\nURL", "# Recipes:\n\n- SuperMerger Weight Sum MBW 0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0\n\nModel A:\n\nNostalgic\n\nModel B:\n\nGoodLife\n\nOutput:\n\nNostalgicLife\n\nBake MoistMixV2VAE:\n\nOutput:\n\nNostalgicLifeVAE" ]
[ "TAGS\n#diffusers #safetensors #Anime #90s #Retro #Vintage #Children Book #Kids Illustrations #Children #Book Illustrations #3D #Pixar #Clumsy_Trainer #OneRing #DucHaiten #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-creativeml-openrail-m #endpoints_compatible #has_space #diffusers-StableDiffusionPipeline #region-us \n", "# Nostalgic Life\n\nA mix of Clumsy_Trainer's Cool Kids v1 and v2, with DucHaiten's GoldenLife compositions and OneRing's Esthetic Retro Anime's style to fill us with nostalgia!\n\nSamples and prompt by digiplay:\n\n!digiplay samples\n\n(Click for larger)\n\nA goddess of sea,Teal long dress,Compassion,dophin and flyfish,penguin,ice,ice rock,\n\nMore samples and prompts:\n\n!image/png\n\n(Click for larger)\n\nTop left: Anime cute little girl, bangs, depth of field, embedded, hair ribbon, long hair, looking at viewer, neck ribbon, non-web source, palm leaf, palm tree, purple eyes, purple hair, red ribbon, ribbon, self upload, solo\n\nTop right: highquality, masterpiece, 1girl, Chi-Chi, close up, :D, smile, arms up, pink helmet, black hair, black eyes, blush, white teeth, bikini armor, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar\n\nBottom left: videogames, little pepperoni pizza winner, robert jordan, josephine wall, roll20, hidari illumination, radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, comic, watched towel. 4k art on canvas by kyoani and ROSSDRAWS\n\nBottom right: Cartoon Pretty CUTE LITTLE Girl, sitting on a box of rockets, CHIBI EYES, holding coke rocket, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza\n\nOriginal pages:\n\nURL (Esthetic Retro Anime)\n\nURL\n\nKIDS ILLUSTRATIONS V2 - URL\n\nDucHaiten-GoldenLife - URL\n\nURL\n\nURL", "# Recipes:\n\n- SuperMerger Weight Sum MBW 0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0\n\nModel A:\n\nNostalgic\n\nModel B:\n\nGoodLife\n\nOutput:\n\nNostalgicLife\n\nBake MoistMixV2VAE:\n\nOutput:\n\nNostalgicLifeVAE" ]
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaPickAndPlace-v3** This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python %%capture !apt install python-opengl !apt install ffmpeg !apt install xvfb !pip3 install pyvirtualdisplay from pyvirtualdisplay import Display virtual_display = Display(visible=0, size=(1400, 900)) virtual_display.start() !pip install stable-baselines3[extra] !pip install gymnasium !pip install huggingface_sb3 !pip install huggingface_hub !pip install panda_gym import gymnasium as gym import panda_gym from stable_baselines3 import A2C from stable_baselines3.common.evaluation import evaluate_policy from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize from stable_baselines3.common.env_util import make_vec_env env_id = "PandaPickAndPlace-v3" env = make_vec_env(env_id, n_envs=4) env = VecNormalize(env, clip_obs = 10) model = A2C("MultiInputPolicy", env, verbose=1) model.learn(1_000_000) model.save("a2c-PandaPickAndPlace-v3") env.save("vec_normalize.pkl") # Load the saved statistics eval_env = DummyVecEnv([lambda: gym.make("PandaPickAndPlace-v3")]) eval_env = VecNormalize.load("vec_normalize.pkl", eval_env) # We need to override the render_mode eval_env.render_mode = "rgb_array" # do not update the statistics at test time eval_env.training = False # reward normalization is not needed at test time eval_env.norm_reward = False # Load the agent model = A2C.load("a2c-PandaPickAndPlace-v3") mean_reward, std_reward = evaluate_policy(model, eval_env) print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}") ```
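A short sketch for pulling the published agent back down from the Hub; the filenames here are assumptions based on the save calls above and on huggingface_sb3's usual `<name>.zip` convention:

```python
# Sketch: download and evaluate the published agent (assumed filenames in the repo).
import gymnasium as gym
import panda_gym  # noqa: F401  (import registers the Panda environments)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

checkpoint = load_from_hub(
    repo_id="Megalino111/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",  # assumption: default SB3 save name
)
stats = load_from_hub(
    repo_id="Megalino111/a2c-PandaPickAndPlace-v3",
    filename="vec_normalize.pkl",  # assumption: uploaded alongside the model
)

# Rebuild the eval environment with the saved normalization statistics frozen.
eval_env = DummyVecEnv([lambda: gym.make("PandaPickAndPlace-v3")])
eval_env = VecNormalize.load(stats, eval_env)
eval_env.training = False
eval_env.norm_reward = False

model = A2C.load(checkpoint)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```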
{"library_name": "stable-baselines3", "tags": ["PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaPickAndPlace-v3", "type": "PandaPickAndPlace-v3"}, "metrics": [{"type": "mean_reward", "value": "-50.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
Megalino111/a2c-PandaPickAndPlace-v3
null
[ "stable-baselines3", "PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-12T02:49:15+00:00
[]
[]
TAGS #stable-baselines3 #PandaPickAndPlace-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# A2C Agent playing PandaPickAndPlace-v3 This is a trained model of a A2C agent playing PandaPickAndPlace-v3 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# A2C Agent playing PandaPickAndPlace-v3\nThis is a trained model of a A2C agent playing PandaPickAndPlace-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #PandaPickAndPlace-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# A2C Agent playing PandaPickAndPlace-v3\nThis is a trained model of a A2C agent playing PandaPickAndPlace-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
null
# DavidAU/Rosa_v1_3B-Q6_K-GGUF This model was converted to GGUF format from [`jeiku/Rosa_v1_3B`](https://huggingface.co/jeiku/Rosa_v1_3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jeiku/Rosa_v1_3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Rosa_v1_3B-Q6_K-GGUF --model rosa_v1_3b.Q6_K.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Rosa_v1_3B-Q6_K-GGUF --model rosa_v1_3b.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m rosa_v1_3b.Q6_K.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/Rosa_v1_3B-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:other", "region:us" ]
null
2024-04-12T02:49:43+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #license-other #region-us
# DavidAU/Rosa_v1_3B-Q6_K-GGUF This model was converted to GGUF format from 'jeiku/Rosa_v1_3B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Rosa_v1_3B-Q6_K-GGUF\nThis model was converted to GGUF format from 'jeiku/Rosa_v1_3B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-other #region-us \n", "# DavidAU/Rosa_v1_3B-Q6_K-GGUF\nThis model was converted to GGUF format from 'jeiku/Rosa_v1_3B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Mini-Mixtral-v0.2-Q8_0-GGUF This model was converted to GGUF format from [`NeuralNovel/Mini-Mixtral-v0.2`](https://huggingface.co/NeuralNovel/Mini-Mixtral-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NeuralNovel/Mini-Mixtral-v0.2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Mini-Mixtral-v0.2-Q8_0-GGUF --model mini-mixtral-v0.2.Q8_0.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Mini-Mixtral-v0.2-Q8_0-GGUF --model mini-mixtral-v0.2.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mini-mixtral-v0.2.Q8_0.gguf -n 128 ```
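Once the `llama-server` command above is running, recent llama.cpp builds expose an OpenAI-compatible HTTP endpoint; a minimal client sketch (the port and route are llama.cpp defaults, and availability of `/v1/chat/completions` depends on the build version):

```python
# Sketch: query a running llama-server over its OpenAI-compatible API.
# Assumes the server is listening on its default address, http://localhost:8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Explain mixture-of-experts in one paragraph."}
        ],
        "max_tokens": 256,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```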
{"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "unsloth/mistral-7b-v0.2", "mistralai/Mistral-7B-Instruct-v0.2", "llama-cpp", "gguf-my-repo"], "base_model": ["unsloth/mistral-7b-v0.2", "mistralai/Mistral-7B-Instruct-v0.2"]}
DavidAU/Mini-Mixtral-v0.2-Q8_0-GGUF
null
[ "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "unsloth/mistral-7b-v0.2", "mistralai/Mistral-7B-Instruct-v0.2", "llama-cpp", "gguf-my-repo", "base_model:unsloth/mistral-7b-v0.2", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-12T02:52:12+00:00
[]
[]
TAGS #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #unsloth/mistral-7b-v0.2 #mistralai/Mistral-7B-Instruct-v0.2 #llama-cpp #gguf-my-repo #base_model-unsloth/mistral-7b-v0.2 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
# DavidAU/Mini-Mixtral-v0.2-Q8_0-GGUF This model was converted to GGUF format from 'NeuralNovel/Mini-Mixtral-v0.2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Mini-Mixtral-v0.2-Q8_0-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Mini-Mixtral-v0.2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #moe #frankenmoe #merge #mergekit #lazymergekit #unsloth/mistral-7b-v0.2 #mistralai/Mistral-7B-Instruct-v0.2 #llama-cpp #gguf-my-repo #base_model-unsloth/mistral-7b-v0.2 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "# DavidAU/Mini-Mixtral-v0.2-Q8_0-GGUF\nThis model was converted to GGUF format from 'NeuralNovel/Mini-Mixtral-v0.2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# Uploaded model - **Developed by:** chatty123 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
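The card stops at training details; a minimal inference sketch, assuming the repo loads as a standard Transformers causal LM and inherits the Mistral-Instruct chat template from its base model:

```python
# Sketch: chat-style generation with the fine-tuned model (assumed transformers layout).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chatty123/mistral_rank8_dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Give me three tips for writing clear model cards."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```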
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "dpo"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
chatty123/mistral_rank8_dpo
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "dpo", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:52:46+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: chatty123 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: chatty123\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
# DavidAU/Rosa_Writing_3B-Q8_0-GGUF This model was converted to GGUF format from [`jeiku/Rosa_Writing_3B`](https://huggingface.co/jeiku/Rosa_Writing_3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jeiku/Rosa_Writing_3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Rosa_Writing_3B-Q8_0-GGUF --model rosa_writing_3b.Q8_0.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Rosa_Writing_3B-Q8_0-GGUF --model rosa_writing_3b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m rosa_writing_3b.Q8_0.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/Rosa_Writing_3B-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-12T02:55:07+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/Rosa_Writing_3B-Q8_0-GGUF This model was converted to GGUF format from 'jeiku/Rosa_Writing_3B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Rosa_Writing_3B-Q8_0-GGUF\nThis model was converted to GGUF format from 'jeiku/Rosa_Writing_3B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/Rosa_Writing_3B-Q8_0-GGUF\nThis model was converted to GGUF format from 'jeiku/Rosa_Writing_3B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
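The card's "How to Get Started" section is still a placeholder. As a hedged sketch only (the repo's tags advertise a 4-bit Llama text-generation checkpoint, but nothing else is documented, so the loading path below is an assumption):

```python
# Sketch: load the checkpoint as a causal LM (assumption: standard transformers layout).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HeydarS/llama2_witQA_peft_v43"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: Who wrote The Old Man and the Sea?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```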
{"library_name": "transformers", "tags": []}
HeydarS/llama2_witQA_peft_v43
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-12T02:55:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# DavidAU/Narrative_Nexus_DPO_7B-Q4_K_M-GGUF This model was converted to GGUF format from [`jeiku/Narrative_Nexus_DPO_7B`](https://huggingface.co/jeiku/Narrative_Nexus_DPO_7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jeiku/Narrative_Nexus_DPO_7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Narrative_Nexus_DPO_7B-Q4_K_M-GGUF --model narrative_nexus_dpo_7b.Q4_K_M.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Narrative_Nexus_DPO_7B-Q4_K_M-GGUF --model narrative_nexus_dpo_7b.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m narrative_nexus_dpo_7b.Q4_K_M.gguf -n 128 ```
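To fetch the quantized file programmatically instead of via `--hf-repo`, a small sketch with `huggingface_hub` (the filename is taken from the CLI command above):

```python
# Sketch: download the GGUF file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DavidAU/Narrative_Nexus_DPO_7B-Q4_K_M-GGUF",
    filename="narrative_nexus_dpo_7b.Q4_K_M.gguf",
)
print(path)  # local cache path, ready to pass to llama.cpp's -m / --model flag
```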
{"library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/Narrative_Nexus_DPO_7B-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:56:22+00:00
[]
[]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #endpoints_compatible #region-us
# DavidAU/Narrative_Nexus_DPO_7B-Q4_K_M-GGUF This model was converted to GGUF format from 'jeiku/Narrative_Nexus_DPO_7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Narrative_Nexus_DPO_7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'jeiku/Narrative_Nexus_DPO_7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #endpoints_compatible #region-us \n", "# DavidAU/Narrative_Nexus_DPO_7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'jeiku/Narrative_Nexus_DPO_7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# DavidAU/Mewthree_7B-Q4_K_M-GGUF This model was converted to GGUF format from [`jeiku/Mewthree_7B`](https://huggingface.co/jeiku/Mewthree_7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jeiku/Mewthree_7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Mewthree_7B-Q4_K_M-GGUF --model mewthree_7b.Q4_K_M.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Mewthree_7B-Q4_K_M-GGUF --model mewthree_7b.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mewthree_7b.Q4_K_M.gguf -n 128 ```
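Recent llama-cpp-python releases can also pull the file straight from the Hub; a sketch, assuming a version new enough to ship `Llama.from_pretrained`:

```python
# Sketch: download-and-load in one step with a recent llama-cpp-python release.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DavidAU/Mewthree_7B-Q4_K_M-GGUF",
    filename="mewthree_7b.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("The meaning of life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```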
{"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["Krisbiantoro/mistral7b_dpo_en"]}
DavidAU/Mewthree_7B-Q4_K_M-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Krisbiantoro/mistral7b_dpo_en", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:57:38+00:00
[]
[]
TAGS #transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-Krisbiantoro/mistral7b_dpo_en #license-other #endpoints_compatible #region-us
# DavidAU/Mewthree_7B-Q4_K_M-GGUF This model was converted to GGUF format from 'jeiku/Mewthree_7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Mewthree_7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'jeiku/Mewthree_7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-Krisbiantoro/mistral7b_dpo_en #license-other #endpoints_compatible #region-us \n", "# DavidAU/Mewthree_7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'jeiku/Mewthree_7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_mind_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset. It achieves the following results on the evaluation set: - Loss: 2.6370 - Accuracy: 0.1150 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.8 | 3 | 2.6441 | 0.0796 | | No log | 1.87 | 7 | 2.6416 | 0.0973 | | 2.6329 | 2.93 | 11 | 2.6409 | 0.1062 | | 2.6329 | 4.0 | 15 | 2.6389 | 0.1150 | | 2.6329 | 4.8 | 18 | 2.6389 | 0.1150 | | 2.6276 | 5.87 | 22 | 2.6381 | 0.1150 | | 2.6276 | 6.93 | 26 | 2.6372 | 0.1150 | | 2.6309 | 8.0 | 30 | 2.6370 | 0.1150 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
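A quick inference sketch (my addition; the card itself stops at training details), using the task and repo id from this record:

```python
# Sketch: classify an intent from a MInDS-14-style audio clip with the fine-tuned model.
from transformers import pipeline

classifier = pipeline("audio-classification", model="uaremine/my_awesome_mind_model")
preds = classifier("example_call.wav")  # path to a 16 kHz mono audio file (assumed input)
for p in preds:
    print(f"{p['label']}: {p['score']:.3f}")
```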
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["minds14"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "my_awesome_mind_model", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "minds14", "type": "minds14", "config": "en-US", "split": "train", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.11504424778761062, "name": "Accuracy"}]}]}]}
uaremine/my_awesome_mind_model
null
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:minds14", "base_model:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-12T02:57:42+00:00
[]
[]
TAGS #transformers #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us
my\_awesome\_mind\_model ======================== This model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset. It achieves the following results on the evaluation set: * Loss: 2.6370 * Accuracy: 0.1150 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
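The template leaves "How to Get Started" empty; since the metadata pins `peft` with base model `mistralai/Mistral-7B-Instruct-v0.2`, a minimal adapter-loading sketch (an assumption, not author-provided code):

```python
# Sketch: load the PEFT adapter on top of its declared base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "joedicator/Enlighten_Instruct")

# Mistral-Instruct-style prompt format inherited from the base model.
inputs = tokenizer("[INST] Summarize what a LoRA adapter does. [/INST]", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=120)[0], skip_special_tokens=True))
```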
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
joedicator/Enlighten_Instruct
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-12T02:58:29+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6110 - Bleu: 5.5726 - Gen Len: 17.5882 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.8494 | 1.0 | 6355 | 1.6346 | 5.4296 | 17.5984 | | 1.8279 | 2.0 | 12710 | 1.6110 | 5.5726 | 17.5882 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
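The generated card stops at the framework versions without a usage example, so here is a minimal inference sketch (not part of the original card — the repo id comes from this row's metadata, and the T5 task prefix is an assumption, since the card lists the dataset as unknown):

```python
# Hedged sketch: repo id taken from this row's metadata; the translation prefix
# mirrors the standard T5/opus_books tutorial and is an assumption, since the
# card does not document the language pair.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "realtiff/my_awesome_opus_books_model"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```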
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "google-t5/t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]}
realtiff/my_awesome_opus_books_model
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T03:00:40+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
my\_awesome\_opus\_books\_model =============================== This model is a fine-tuned version of google-t5/t5-small on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6110 * Bleu: 5.5726 * Gen Len: 17.5882 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
  - model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # Inverted-V (peak) curve: Hermes for input & output, WizardMath in the middle layers
```
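The card ends at the merge recipe, so a hedged usage sketch for the resulting checkpoint may help (the repo id comes from this row's metadata; the rest is standard transformers usage, not part of the original card):

```python
# Hedged sketch: repo id from this row's metadata; chat formatting is delegated
# to the tokenizer's bundled template (inherited from the Hermes base) rather
# than hard-coding ChatML.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mergekit-community/mergekit-slerp-lnhotro"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is 12 * 17? Show your work."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```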
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
mergekit-community/mergekit-slerp-lnhotro
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T03:01:09+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * NousResearch/Hermes-2-Pro-Mistral-7B * WizardLM/WizardMath-7B-V1.1 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
## Exllama v2 Quantizations of SlimHercules-4.0-Mistral-7B-v0.2

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains a quantization at a different bits per weight, with the main branch containing only the measurement.json needed for further conversions.

Original model: https://huggingface.co/Locutusque/SlimHercules-4.0-Mistral-7B-v0.2

## Prompt format

```
<|im_start|>system
{message}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>call
{function call message}<|im_end|>
<|im_start|>function
{function response message}<|im_end|>
<|im_start|>assistant
{assistant message}</s>
```

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2 SlimHercules-4.0-Mistral-7B-v0.2-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about measurement.json) to a folder called `SlimHercules-4.0-Mistral-7B-v0.2-exl2`:

```shell
mkdir SlimHercules-4.0-Mistral-7B-v0.2-exl2
huggingface-cli download bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2 --local-dir SlimHercules-4.0-Mistral-7B-v0.2-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir SlimHercules-4.0-Mistral-7B-v0.2-exl2-6_5
huggingface-cli download bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2 --revision 6_5 --local-dir SlimHercules-4.0-Mistral-7B-v0.2-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
mkdir SlimHercules-4.0-Mistral-7B-v0.2-exl2-6.5
huggingface-cli download bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2 --revision 6_5 --local-dir SlimHercules-4.0-Mistral-7B-v0.2-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
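For programmatic use, here is a hedged loading sketch against the ExLlamaV2 0.0.18 Python API (the local directory name assumes the 6_5 branch was downloaded as shown above; this snippet is not part of the original card):

```python
# Hedged sketch against the ExLlamaV2 0.0.x Python API; model_dir assumes the
# 6_5 branch was fetched per the download instructions above.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "SlimHercules-4.0-Mistral-7B-v0.2-exl2-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # splits weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# Prompt follows the ChatML-style format documented above.
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate_simple(prompt, settings, num_tokens=200))
```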
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["medical", "science", "biology", "chemistry", "not-for-all-audiences"], "datasets": ["Locutusque/hercules-v4.0"], "base_model": "alpindale/Mistral-7B-v0.2-hf", "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/SlimHercules-4.0-Mistral-7B-v0.2-exl2
null
[ "transformers", "medical", "science", "biology", "chemistry", "not-for-all-audiences", "text-generation", "en", "dataset:Locutusque/hercules-v4.0", "base_model:alpindale/Mistral-7B-v0.2-hf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T03:03:29+00:00
[]
[ "en" ]
TAGS #transformers #medical #science #biology #chemistry #not-for-all-audiences #text-generation #en #dataset-Locutusque/hercules-v4.0 #base_model-alpindale/Mistral-7B-v0.2-hf #license-apache-2.0 #endpoints_compatible #region-us
Exllama v2 Quantizations of SlimHercules-4.0-Mistral-7B-v0.2
------------------------------------------------------------

Using turboderp's ExLlamaV2 v0.0.18 for quantization.

**The "main" branch only contains the URL; download one of the other branches for the model (see below)**

Each branch contains a quantization at a different bits per weight, with the main branch containing only the URL needed for further conversions.

Original model: URL

Prompt format
-------------

Available sizes
---------------

Download instructions
---------------------

With git:

With huggingface hub (credit to TheBloke for instructions):

To download the 'main' branch (only useful if you only care about URL) to a folder called 'SlimHercules-4.0-Mistral-7B-v0.2-exl2':

To download from a different branch, add the '--revision' parameter:

Linux:

Windows (which apparently doesn't like \_ in folders sometimes?):

Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#transformers #medical #science #biology #chemistry #not-for-all-audiences #text-generation #en #dataset-Locutusque/hercules-v4.0 #base_model-alpindale/Mistral-7B-v0.2-hf #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
# DavidAU/Eclectic-Maid-7B-v2-Q6_K-GGUF
This model was converted to GGUF format from [`ND911/Eclectic-Maid-7B-v2`](https://huggingface.co/ND911/Eclectic-Maid-7B-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ND911/Eclectic-Maid-7B-v2) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Eclectic-Maid-7B-v2-Q6_K-GGUF --model eclectic-maid-7b-v2.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/Eclectic-Maid-7B-v2-Q6_K-GGUF --model eclectic-maid-7b-v2.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eclectic-maid-7b-v2.Q6_K.gguf -n 128
```
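The gguf file can also be loaded programmatically; here is a hedged sketch using the llama-cpp-python bindings (the filename matches the CLI examples above, the sampling parameters are illustrative, and this snippet is not part of the original card):

```python
# Hedged sketch using the llama-cpp-python bindings (pip install llama-cpp-python);
# the gguf filename matches this repo's CLI examples, parameters are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="eclectic-maid-7b-v2.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```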
{"library_name": "transformers", "tags": ["mergekit", "merge", "not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "base_model": []}
DavidAU/Eclectic-Maid-7B-v2-Q6_K-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us" ]
null
2024-04-12T03:04:43+00:00
[]
[]
TAGS #transformers #gguf #mergekit #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #endpoints_compatible #region-us
# DavidAU/Eclectic-Maid-7B-v2-Q6_K-GGUF This model was converted to GGUF format from 'ND911/Eclectic-Maid-7B-v2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Eclectic-Maid-7B-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Eclectic-Maid-7B-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #mergekit #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #endpoints_compatible #region-us \n", "# DavidAU/Eclectic-Maid-7B-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Eclectic-Maid-7B-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# DavidAU/Eclectic-Maid-7B-Q6_K-GGUF
This model was converted to GGUF format from [`ND911/Eclectic-Maid-7B`](https://huggingface.co/ND911/Eclectic-Maid-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ND911/Eclectic-Maid-7B) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Eclectic-Maid-7B-Q6_K-GGUF --model eclectic-maid-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/Eclectic-Maid-7B-Q6_K-GGUF --model eclectic-maid-7b.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eclectic-maid-7b.Q6_K.gguf -n 128
```
{"library_name": "transformers", "tags": ["mergekit", "merge", "not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "base_model": []}
DavidAU/Eclectic-Maid-7B-Q6_K-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us" ]
null
2024-04-12T03:05:52+00:00
[]
[]
TAGS #transformers #gguf #mergekit #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #endpoints_compatible #region-us
# DavidAU/Eclectic-Maid-7B-Q6_K-GGUF This model was converted to GGUF format from 'ND911/Eclectic-Maid-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Eclectic-Maid-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Eclectic-Maid-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #mergekit #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #endpoints_compatible #region-us \n", "# DavidAU/Eclectic-Maid-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Eclectic-Maid-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# DavidAU/NSFW_DPO_Noromaid-7B-v2-Q6_K-GGUF
This model was converted to GGUF format from [`ND911/NSFW_DPO_Noromaid-7B-v2`](https://huggingface.co/ND911/NSFW_DPO_Noromaid-7B-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ND911/NSFW_DPO_Noromaid-7B-v2) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/NSFW_DPO_Noromaid-7B-v2-Q6_K-GGUF --model nsfw_dpo_noromaid-7b-v2.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/NSFW_DPO_Noromaid-7B-v2-Q6_K-GGUF --model nsfw_dpo_noromaid-7b-v2.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nsfw_dpo_noromaid-7b-v2.Q6_K.gguf -n 128
```
{"library_name": "transformers", "tags": ["mergekit", "merge", "not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "base_model": ["mistralai/Mistral-7B-Instruct-v0.2", "athirdpath/NSFW_DPO_Noromaid-7b"]}
DavidAU/NSFW_DPO_Noromaid-7B-v2-Q6_K-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:athirdpath/NSFW_DPO_Noromaid-7b", "endpoints_compatible", "region:us" ]
null
2024-04-12T03:07:02+00:00
[]
[]
TAGS #transformers #gguf #mergekit #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-athirdpath/NSFW_DPO_Noromaid-7b #endpoints_compatible #region-us
# DavidAU/NSFW_DPO_Noromaid-7B-v2-Q6_K-GGUF This model was converted to GGUF format from 'ND911/NSFW_DPO_Noromaid-7B-v2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/NSFW_DPO_Noromaid-7B-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/NSFW_DPO_Noromaid-7B-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #mergekit #merge #not-for-all-audiences #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-athirdpath/NSFW_DPO_Noromaid-7b #endpoints_compatible #region-us \n", "# DavidAU/NSFW_DPO_Noromaid-7B-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/NSFW_DPO_Noromaid-7B-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
AI4DS/DeepSeek-NL2SQL-33B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T03:08:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# DavidAU/Franken-MistressMaid-7B-v2-Q6_K-GGUF
This model was converted to GGUF format from [`ND911/Franken-MistressMaid-7B-v2`](https://huggingface.co/ND911/Franken-MistressMaid-7B-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ND911/Franken-MistressMaid-7B-v2) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Franken-MistressMaid-7B-v2-Q6_K-GGUF --model franken-mistressmaid-7b-v2.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/Franken-MistressMaid-7B-v2-Q6_K-GGUF --model franken-mistressmaid-7b-v2.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m franken-mistressmaid-7b-v2.Q6_K.gguf -n 128
```
{"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["ibm/merlinite-7b", "l3utterfly/mistral-7b-v0.1-layla-v4", "Undi95/Toppy-M-7B", "senseable/WestLake-7B-v2", "jondurbin/bagel-dpo-7b-v0.4"]}
DavidAU/Franken-MistressMaid-7B-v2-Q6_K-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:ibm/merlinite-7b", "base_model:l3utterfly/mistral-7b-v0.1-layla-v4", "base_model:Undi95/Toppy-M-7B", "base_model:senseable/WestLake-7B-v2", "base_model:jondurbin/bagel-dpo-7b-v0.4", "endpoints_compatible", "region:us" ]
null
2024-04-12T03:13:55+00:00
[]
[]
TAGS #transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-ibm/merlinite-7b #base_model-l3utterfly/mistral-7b-v0.1-layla-v4 #base_model-Undi95/Toppy-M-7B #base_model-senseable/WestLake-7B-v2 #base_model-jondurbin/bagel-dpo-7b-v0.4 #endpoints_compatible #region-us
# DavidAU/Franken-MistressMaid-7B-v2-Q6_K-GGUF This model was converted to GGUF format from 'ND911/Franken-MistressMaid-7B-v2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Franken-MistressMaid-7B-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Franken-MistressMaid-7B-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-ibm/merlinite-7b #base_model-l3utterfly/mistral-7b-v0.1-layla-v4 #base_model-Undi95/Toppy-M-7B #base_model-senseable/WestLake-7B-v2 #base_model-jondurbin/bagel-dpo-7b-v0.4 #endpoints_compatible #region-us \n", "# DavidAU/Franken-MistressMaid-7B-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Franken-MistressMaid-7B-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# DavidAU/Franken-MistressMaid-7B-Q6_K-GGUF
This model was converted to GGUF format from [`ND911/Franken-MistressMaid-7B`](https://huggingface.co/ND911/Franken-MistressMaid-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ND911/Franken-MistressMaid-7B) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Franken-MistressMaid-7B-Q6_K-GGUF --model franken-mistressmaid-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/Franken-MistressMaid-7B-Q6_K-GGUF --model franken-mistressmaid-7b.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m franken-mistressmaid-7b.Q6_K.gguf -n 128
```
{"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["ibm/merlinite-7b", "mistralai/Mistral-7B-v0.1", "l3utterfly/mistral-7b-v0.1-layla-v4", "SanjiWatsuki/Sonya-7B", "NeverSleep/Noromaid-7b-v0.2", "migtissera/SynthIA-7B-v1.3"]}
DavidAU/Franken-MistressMaid-7B-Q6_K-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:ibm/merlinite-7b", "base_model:mistralai/Mistral-7B-v0.1", "base_model:l3utterfly/mistral-7b-v0.1-layla-v4", "base_model:SanjiWatsuki/Sonya-7B", "base_model:NeverSleep/Noromaid-7b-v0.2", "base_model:migtissera/SynthIA-7B-v1.3", "endpoints_compatible", "region:us" ]
null
2024-04-12T03:15:48+00:00
[]
[]
TAGS #transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-ibm/merlinite-7b #base_model-mistralai/Mistral-7B-v0.1 #base_model-l3utterfly/mistral-7b-v0.1-layla-v4 #base_model-SanjiWatsuki/Sonya-7B #base_model-NeverSleep/Noromaid-7b-v0.2 #base_model-migtissera/SynthIA-7B-v1.3 #endpoints_compatible #region-us
# DavidAU/Franken-MistressMaid-7B-Q6_K-GGUF This model was converted to GGUF format from 'ND911/Franken-MistressMaid-7B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Franken-MistressMaid-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Franken-MistressMaid-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-ibm/merlinite-7b #base_model-mistralai/Mistral-7B-v0.1 #base_model-l3utterfly/mistral-7b-v0.1-layla-v4 #base_model-SanjiWatsuki/Sonya-7B #base_model-NeverSleep/Noromaid-7b-v0.2 #base_model-migtissera/SynthIA-7B-v1.3 #endpoints_compatible #region-us \n", "# DavidAU/Franken-MistressMaid-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Franken-MistressMaid-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.1"}
ilya-ilya/Mistral_Alpaca
null
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "region:us" ]
null
2024-04-12T03:15:55+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #mistral #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.1 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #mistral #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.1 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
# CalmexperimentPercival_01-7B CalmexperimentPercival_01-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [allknowingroger/CalmExperiment-7B-slerp](https://huggingface.co/allknowingroger/CalmExperiment-7B-slerp) * [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: allknowingroger/CalmExperiment-7B-slerp layer_range: [0, 32] - model: AurelPx/Percival_01-7b-slerp layer_range: [0, 32] merge_method: slerp base_model: allknowingroger/CalmExperiment-7B-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/CalmexperimentPercival_01-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["allknowingroger/CalmExperiment-7B-slerp", "AurelPx/Percival_01-7b-slerp"]}
automerger/CalmexperimentPercival_01-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:allknowingroger/CalmExperiment-7B-slerp", "base_model:AurelPx/Percival_01-7b-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T03:19:03+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-allknowingroger/CalmExperiment-7B-slerp #base_model-AurelPx/Percival_01-7b-slerp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CalmexperimentPercival_01-7B CalmexperimentPercival_01-7B is an automated merge created by Maxime Labonne using the following configuration. * allknowingroger/CalmExperiment-7B-slerp * AurelPx/Percival_01-7b-slerp ## Configuration ## Usage
[ "# CalmexperimentPercival_01-7B\n\nCalmexperimentPercival_01-7B is an automated merge created by Maxime Labonne using the following configuration.\n* allknowingroger/CalmExperiment-7B-slerp\n* AurelPx/Percival_01-7b-slerp", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-allknowingroger/CalmExperiment-7B-slerp #base_model-AurelPx/Percival_01-7b-slerp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CalmexperimentPercival_01-7B\n\nCalmexperimentPercival_01-7B is an automated merge created by Maxime Labonne using the following configuration.\n* allknowingroger/CalmExperiment-7B-slerp\n* AurelPx/Percival_01-7b-slerp", "## Configuration", "## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
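The card above leaves its "How to Get Started with the Model" section as [More Information Needed]. A minimal, generic sketch follows; the repo id comes from this record's metadata, but everything else is an assumption, not the author's documented usage.

```python
# Minimal getting-started sketch for the record above; the original card leaves
# this section empty, so the usage shown here is a generic assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Grayx/unstable_87"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```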
{"library_name": "transformers", "tags": []}
Grayx/unstable_87
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T03:19:40+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/EE-LMaid-7B-Slerp-Q6_K-GGUF This model was converted to GGUF format from [`ND911/EE-LMaid-7B-Slerp`](https://huggingface.co/ND911/EE-LMaid-7B-Slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ND911/EE-LMaid-7B-Slerp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/EE-LMaid-7B-Slerp-Q6_K-GGUF --model ee-lmaid-7b-slerp.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/EE-LMaid-7B-Slerp-Q6_K-GGUF --model ee-lmaid-7b-slerp.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ```bash git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m ee-lmaid-7b-slerp.Q6_K.gguf -n 128 ```
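The same Q6_K file can also be driven from Python. This is a hedged sketch using llama-cpp-python, which the card itself does not mention; the file name is taken from the CLI commands above.

```python
# Loading the same quantized file from Python via llama-cpp-python -- an
# alternative the original card does not cover, shown here as a sketch.
from llama_cpp import Llama

llm = Llama(
    model_path="ee-lmaid-7b-slerp.Q6_K.gguf",  # file name from the CLI examples above
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```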
{"tags": ["merge", "mergekit", "lazymergekit", "SanjiWatsuki/Loyal-Macaroni-Maid-7B", "ND911/EE-Maid-7B-Slerp", "llama-cpp", "gguf-my-repo"], "base_model": ["SanjiWatsuki/Loyal-Macaroni-Maid-7B", "ND911/EE-Maid-7B-Slerp"]}
DavidAU/EE-LMaid-7B-Slerp-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "SanjiWatsuki/Loyal-Macaroni-Maid-7B", "ND911/EE-Maid-7B-Slerp", "llama-cpp", "gguf-my-repo", "base_model:SanjiWatsuki/Loyal-Macaroni-Maid-7B", "base_model:ND911/EE-Maid-7B-Slerp", "region:us" ]
null
2024-04-12T03:20:50+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #SanjiWatsuki/Loyal-Macaroni-Maid-7B #ND911/EE-Maid-7B-Slerp #llama-cpp #gguf-my-repo #base_model-SanjiWatsuki/Loyal-Macaroni-Maid-7B #base_model-ND911/EE-Maid-7B-Slerp #region-us
# DavidAU/EE-LMaid-7B-Slerp-Q6_K-GGUF This model was converted to GGUF format from 'ND911/EE-LMaid-7B-Slerp' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/EE-LMaid-7B-Slerp-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/EE-LMaid-7B-Slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #SanjiWatsuki/Loyal-Macaroni-Maid-7B #ND911/EE-Maid-7B-Slerp #llama-cpp #gguf-my-repo #base_model-SanjiWatsuki/Loyal-Macaroni-Maid-7B #base_model-ND911/EE-Maid-7B-Slerp #region-us \n", "# DavidAU/EE-LMaid-7B-Slerp-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/EE-LMaid-7B-Slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-manimml This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1364 - Rouge1: 26.0604 - Rouge2: 19.9761 - Rougel: 25.9003 - Rougelsum: 25.9723 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 50 | 1.7080 | 4.4713 | 0.3984 | 4.3577 | 4.3315 | 19.0 | | No log | 2.0 | 100 | 0.9122 | 9.4373 | 1.9479 | 8.8784 | 8.85 | 19.0 | | No log | 3.0 | 150 | 0.4992 | 20.791 | 11.5065 | 20.1233 | 20.033 | 19.0 | | No log | 4.0 | 200 | 0.3168 | 26.5087 | 16.3231 | 25.5779 | 25.5911 | 19.0 | | No log | 5.0 | 250 | 0.2367 | 25.782 | 17.2565 | 25.2582 | 25.2519 | 19.0 | | No log | 6.0 | 300 | 0.1876 | 25.8775 | 18.2261 | 25.4038 | 25.4187 | 19.0 | | No log | 7.0 | 350 | 0.1612 | 25.4884 | 18.3963 | 25.1829 | 25.2095 | 18.995 | | No log | 8.0 | 400 | 0.1474 | 26.1811 | 19.6167 | 25.8602 | 25.9093 | 19.0 | | No log | 9.0 | 450 | 0.1393 | 25.9726 | 19.8381 | 25.7971 | 25.8603 | 19.0 | | 0.9353 | 10.0 | 500 | 0.1364 | 26.0604 | 19.9761 | 25.9003 | 25.9723 | 19.0 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
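A hypothetical inference sketch for the fine-tuned checkpoint above follows; the repo id is taken from this record, but the input string is invented, not drawn from the actual training data.

```python
# Hypothetical inference sketch for the checkpoint above (repo id taken from
# this record); the input string is invented, not from the training data.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Quanda2369/t5-small-finetuned-manimml",
)
result = generator("Create a blue circle and fade it in.", max_new_tokens=20)
print(result[0]["generated_text"])
```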
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "t5-small-finetuned-manimml", "results": []}]}
Quanda2369/t5-small-finetuned-manimml
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T03:21:38+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-manimml ========================== This model is a fine-tuned version of t5-small on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1364 * Rouge1: 26.0604 * Rouge2: 19.9761 * Rougel: 25.9003 * Rougelsum: 25.9723 * Gen Len: 19.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # long-t5-local-base-finetuned-justification-v09 This model is a fine-tuned version of [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3147 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | No log | 1.0 | 338 | 20.0779 | | 26.617 | 2.0 | 676 | 17.6054 | | 22.6857 | 3.0 | 1014 | 15.1205 | | 22.6857 | 4.0 | 1352 | 12.4837 | | 18.639 | 5.0 | 1690 | 9.9114 | | 14.4577 | 6.0 | 2028 | 8.0629 | | 14.4577 | 7.0 | 2366 | 7.5255 | | 10.7004 | 8.0 | 2704 | 7.4006 | | 7.8669 | 9.0 | 3042 | 7.2827 | | 7.8669 | 10.0 | 3380 | 7.1306 | | 6.3058 | 11.0 | 3718 | 6.9313 | | 5.3507 | 12.0 | 4056 | 6.6880 | | 5.3507 | 13.0 | 4394 | 6.3980 | | 5.0661 | 14.0 | 4732 | 6.1019 | | 4.6576 | 15.0 | 5070 | 5.7985 | | 4.6576 | 16.0 | 5408 | 5.4902 | | 4.374 | 17.0 | 5746 | 5.2013 | | 4.1022 | 18.0 | 6084 | 4.9162 | | 4.1022 | 19.0 | 6422 | 4.6802 | | 3.9773 | 20.0 | 6760 | 4.4889 | | 3.7391 | 21.0 | 7098 | 4.3299 | | 3.7391 | 22.0 | 7436 | 4.2127 | | 3.6007 | 23.0 | 7774 | 4.1193 | | 3.472 | 24.0 | 8112 | 4.0468 | | 3.472 | 25.0 | 8450 | 3.9895 | | 3.3327 | 26.0 | 8788 | 3.9357 | | 3.3196 | 27.0 | 9126 | 3.8895 | | 3.3196 | 28.0 | 9464 | 3.8449 | | 3.229 | 29.0 | 9802 | 3.8026 | | 3.1795 | 30.0 | 10140 | 3.7613 | | 3.1795 | 31.0 | 10478 | 3.7200 | | 3.0775 | 32.0 | 10816 | 3.6811 | | 3.065 | 33.0 | 11154 | 3.6424 | | 3.065 | 34.0 | 11492 | 3.6048 | | 3.0145 | 35.0 | 11830 | 3.5750 | | 2.9987 | 36.0 | 12168 | 3.5381 | | 2.9096 | 37.0 | 12506 | 3.5031 | | 2.9096 | 38.0 | 12844 | 3.4699 | | 2.8816 | 39.0 | 13182 | 3.4402 | | 2.8767 | 40.0 | 13520 | 3.4116 | | 2.8767 | 41.0 | 13858 | 3.3847 | | 2.8189 | 42.0 | 14196 | 3.3540 | | 2.8297 | 43.0 | 14534 | 3.3275 | | 2.8297 | 44.0 | 14872 | 3.3008 | | 2.7376 | 45.0 | 15210 | 3.2745 | | 2.7519 | 46.0 | 15548 | 3.2521 | | 2.7519 | 47.0 | 15886 | 3.2273 | | 2.7207 | 48.0 | 16224 | 3.2038 | | 2.7056 | 49.0 | 16562 | 3.1822 | | 2.7056 | 50.0 | 16900 | 3.1619 | | 2.6539 | 51.0 | 17238 | 3.1426 | | 2.6393 | 52.0 | 17576 | 3.1219 | | 2.6393 | 53.0 | 17914 | 3.1015 | | 2.6396 | 54.0 | 18252 | 3.0818 | | 2.6029 | 55.0 | 18590 | 3.0604 | | 2.6029 | 56.0 | 18928 | 3.0448 | | 2.5527 | 57.0 | 19266 | 3.0251 | | 2.5793 | 58.0 | 19604 | 3.0069 | | 2.5793 | 59.0 | 19942 | 2.9911 | | 2.5443 | 60.0 | 20280 | 2.9724 | | 2.5083 | 61.0 | 20618 | 2.9560 | | 2.5083 | 62.0 | 20956 | 2.9387 | | 2.5368 | 63.0 | 21294 | 2.9205 | | 2.4771 | 64.0 | 21632 | 2.9040 | | 2.4771 | 65.0 | 21970 | 2.8895 | | 2.4875 | 66.0 | 22308 | 2.8701 | | 2.4532 | 67.0 | 22646 | 2.8570 | | 2.4532 | 68.0 | 22984 | 2.8397 | | 2.4276 | 69.0 | 23322 | 2.8243 | | 2.4279 | 70.0 | 23660 | 2.8110 | | 2.4279 | 71.0 | 23998 | 2.7950 | | 2.3944 | 72.0 | 24336 | 2.7816 | | 2.3907 | 73.0 | 24674 | 2.7704 | | 2.4014 | 74.0 | 
25012 | 2.7564 | | 2.4014 | 75.0 | 25350 | 2.7423 | | 2.3698 | 76.0 | 25688 | 2.7295 | | 2.3408 | 77.0 | 26026 | 2.7172 | | 2.3408 | 78.0 | 26364 | 2.7046 | | 2.3404 | 79.0 | 26702 | 2.6916 | | 2.316 | 80.0 | 27040 | 2.6827 | | 2.316 | 81.0 | 27378 | 2.6706 | | 2.3322 | 82.0 | 27716 | 2.6607 | | 2.3005 | 83.0 | 28054 | 2.6500 | | 2.3005 | 84.0 | 28392 | 2.6408 | | 2.2661 | 85.0 | 28730 | 2.6315 | | 2.2946 | 86.0 | 29068 | 2.6231 | | 2.2946 | 87.0 | 29406 | 2.6131 | | 2.2493 | 88.0 | 29744 | 2.6034 | | 2.2623 | 89.0 | 30082 | 2.5940 | | 2.2623 | 90.0 | 30420 | 2.5857 | | 2.2464 | 91.0 | 30758 | 2.5777 | | 2.2203 | 92.0 | 31096 | 2.5714 | | 2.2203 | 93.0 | 31434 | 2.5641 | | 2.233 | 94.0 | 31772 | 2.5562 | | 2.2101 | 95.0 | 32110 | 2.5493 | | 2.2101 | 96.0 | 32448 | 2.5435 | | 2.2321 | 97.0 | 32786 | 2.5376 | | 2.1743 | 98.0 | 33124 | 2.5304 | | 2.1743 | 99.0 | 33462 | 2.5253 | | 2.2033 | 100.0 | 33800 | 2.5202 | | 2.1874 | 101.0 | 34138 | 2.5154 | | 2.1874 | 102.0 | 34476 | 2.5092 | | 2.1615 | 103.0 | 34814 | 2.5054 | | 2.1565 | 104.0 | 35152 | 2.5001 | | 2.1565 | 105.0 | 35490 | 2.4950 | | 2.152 | 106.0 | 35828 | 2.4897 | | 2.1398 | 107.0 | 36166 | 2.4851 | | 2.1424 | 108.0 | 36504 | 2.4812 | | 2.1424 | 109.0 | 36842 | 2.4767 | | 2.1272 | 110.0 | 37180 | 2.4734 | | 2.1171 | 111.0 | 37518 | 2.4686 | | 2.1171 | 112.0 | 37856 | 2.4649 | | 2.1325 | 113.0 | 38194 | 2.4597 | | 2.0975 | 114.0 | 38532 | 2.4567 | | 2.0975 | 115.0 | 38870 | 2.4523 | | 2.1156 | 116.0 | 39208 | 2.4487 | | 2.0628 | 117.0 | 39546 | 2.4452 | | 2.0628 | 118.0 | 39884 | 2.4417 | | 2.1061 | 119.0 | 40222 | 2.4385 | | 2.0897 | 120.0 | 40560 | 2.4343 | | 2.0897 | 121.0 | 40898 | 2.4316 | | 2.083 | 122.0 | 41236 | 2.4271 | | 2.0693 | 123.0 | 41574 | 2.4241 | | 2.0693 | 124.0 | 41912 | 2.4212 | | 2.0748 | 125.0 | 42250 | 2.4180 | | 2.0497 | 126.0 | 42588 | 2.4152 | | 2.0497 | 127.0 | 42926 | 2.4128 | | 2.0803 | 128.0 | 43264 | 2.4098 | | 2.0701 | 129.0 | 43602 | 2.4060 | | 2.0701 | 130.0 | 43940 | 2.4032 | | 2.0358 | 131.0 | 44278 | 2.4010 | | 2.0487 | 132.0 | 44616 | 2.3981 | | 2.0487 | 133.0 | 44954 | 2.3956 | | 2.0402 | 134.0 | 45292 | 2.3927 | | 2.0425 | 135.0 | 45630 | 2.3895 | | 2.0425 | 136.0 | 45968 | 2.3873 | | 2.0379 | 137.0 | 46306 | 2.3844 | | 2.0297 | 138.0 | 46644 | 2.3818 | | 2.0297 | 139.0 | 46982 | 2.3785 | | 2.046 | 140.0 | 47320 | 2.3766 | | 2.0066 | 141.0 | 47658 | 2.3739 | | 2.0066 | 142.0 | 47996 | 2.3712 | | 2.0186 | 143.0 | 48334 | 2.3696 | | 2.0474 | 144.0 | 48672 | 2.3669 | | 1.9858 | 145.0 | 49010 | 2.3652 | | 1.9858 | 146.0 | 49348 | 2.3631 | | 2.0216 | 147.0 | 49686 | 2.3609 | | 1.9961 | 148.0 | 50024 | 2.3588 | | 1.9961 | 149.0 | 50362 | 2.3573 | | 1.9873 | 150.0 | 50700 | 2.3554 | | 2.0043 | 151.0 | 51038 | 2.3530 | | 2.0043 | 152.0 | 51376 | 2.3508 | | 2.0045 | 153.0 | 51714 | 2.3490 | | 1.9951 | 154.0 | 52052 | 2.3475 | | 1.9951 | 155.0 | 52390 | 2.3458 | | 2.02 | 156.0 | 52728 | 2.3448 | | 1.9924 | 157.0 | 53066 | 2.3429 | | 1.9924 | 158.0 | 53404 | 2.3410 | | 1.9757 | 159.0 | 53742 | 2.3398 | | 1.9882 | 160.0 | 54080 | 2.3383 | | 1.9882 | 161.0 | 54418 | 2.3368 | | 2.0006 | 162.0 | 54756 | 2.3355 | | 1.9984 | 163.0 | 55094 | 2.3341 | | 1.9984 | 164.0 | 55432 | 2.3331 | | 1.9823 | 165.0 | 55770 | 2.3318 | | 1.9548 | 166.0 | 56108 | 2.3309 | | 1.9548 | 167.0 | 56446 | 2.3297 | | 1.9812 | 168.0 | 56784 | 2.3288 | | 1.9793 | 169.0 | 57122 | 2.3276 | | 1.9793 | 170.0 | 57460 | 2.3264 | | 2.0022 | 171.0 | 57798 | 2.3255 | | 1.9593 | 172.0 | 58136 | 2.3248 | | 1.9593 | 173.0 | 58474 | 2.3236 | | 
1.9756 | 174.0 | 58812 | 2.3228 | | 1.9835 | 175.0 | 59150 | 2.3221 | | 1.9835 | 176.0 | 59488 | 2.3214 | | 1.9655 | 177.0 | 59826 | 2.3208 | | 1.9712 | 178.0 | 60164 | 2.3202 | | 1.9658 | 179.0 | 60502 | 2.3195 | | 1.9658 | 180.0 | 60840 | 2.3188 | | 1.9501 | 181.0 | 61178 | 2.3185 | | 1.992 | 182.0 | 61516 | 2.3180 | | 1.992 | 183.0 | 61854 | 2.3176 | | 1.9784 | 184.0 | 62192 | 2.3172 | | 1.968 | 185.0 | 62530 | 2.3169 | | 1.968 | 186.0 | 62868 | 2.3165 | | 1.9746 | 187.0 | 63206 | 2.3161 | | 1.9615 | 188.0 | 63544 | 2.3159 | | 1.9615 | 189.0 | 63882 | 2.3157 | | 1.9405 | 190.0 | 64220 | 2.3155 | | 1.9869 | 191.0 | 64558 | 2.3153 | | 1.9869 | 192.0 | 64896 | 2.3152 | | 1.9614 | 193.0 | 65234 | 2.3150 | | 1.9641 | 194.0 | 65572 | 2.3149 | | 1.9641 | 195.0 | 65910 | 2.3148 | | 1.9813 | 196.0 | 66248 | 2.3148 | | 1.9676 | 197.0 | 66586 | 2.3147 | | 1.9676 | 198.0 | 66924 | 2.3147 | | 1.9302 | 199.0 | 67262 | 2.3147 | | 1.99 | 200.0 | 67600 | 2.3147 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
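A sketch of loading the LongT5 checkpoint above is shown below; LongT5's local attention is what lets it handle much longer inputs than vanilla T5. The repo id comes from this record, while the input text is invented.

```python
# Sketch of loading the LongT5 checkpoint above; the example input is invented.
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

model_id = "satyanshu404/long-t5-local-base-finetuned-justification-v09"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LongT5ForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer(
    "A long source document whose justification we want the model to generate ...",
    return_tensors="pt",
    truncation=True,
)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```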
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/long-t5-local-base", "model-index": [{"name": "long-t5-local-base-finetuned-justification-v09", "results": []}]}
satyanshu404/long-t5-local-base-finetuned-justification-v09
null
[ "transformers", "safetensors", "longt5", "text2text-generation", "generated_from_trainer", "base_model:google/long-t5-local-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T03:22:05+00:00
[]
[]
TAGS #transformers #safetensors #longt5 #text2text-generation #generated_from_trainer #base_model-google/long-t5-local-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
long-t5-local-base-finetuned-justification-v09 ============================================== This model is a fine-tuned version of google/long-t5-local-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.3147 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-07 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 200 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 200", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #longt5 #text2text-generation #generated_from_trainer #base_model-google/long-t5-local-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 200", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
# DavidAU/EE-Maid-7B-Slerp-Q6_K-GGUF This model was converted to GGUF format from [`ND911/EE-Maid-7B-Slerp`](https://huggingface.co/ND911/EE-Maid-7B-Slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ND911/EE-Maid-7B-Slerp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/EE-Maid-7B-Slerp-Q6_K-GGUF --model ee-maid-7b-slerp.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/EE-Maid-7B-Slerp-Q6_K-GGUF --model ee-maid-7b-slerp.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ```bash git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m ee-maid-7b-slerp.Q6_K.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "SanjiWatsuki/Silicon-Maid-7B", "maywell/Synatra-7B-v0.3-RP", "llama-cpp", "gguf-my-repo"], "base_model": ["SanjiWatsuki/Silicon-Maid-7B", "maywell/Synatra-7B-v0.3-RP"]}
DavidAU/EE-Maid-7B-Slerp-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "SanjiWatsuki/Silicon-Maid-7B", "maywell/Synatra-7B-v0.3-RP", "llama-cpp", "gguf-my-repo", "base_model:SanjiWatsuki/Silicon-Maid-7B", "base_model:maywell/Synatra-7B-v0.3-RP", "region:us" ]
null
2024-04-12T03:23:13+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #SanjiWatsuki/Silicon-Maid-7B #maywell/Synatra-7B-v0.3-RP #llama-cpp #gguf-my-repo #base_model-SanjiWatsuki/Silicon-Maid-7B #base_model-maywell/Synatra-7B-v0.3-RP #region-us
# DavidAU/EE-Maid-7B-Slerp-Q6_K-GGUF This model was converted to GGUF format from 'ND911/EE-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/EE-Maid-7B-Slerp-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/EE-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #SanjiWatsuki/Silicon-Maid-7B #maywell/Synatra-7B-v0.3-RP #llama-cpp #gguf-my-repo #base_model-SanjiWatsuki/Silicon-Maid-7B #base_model-maywell/Synatra-7B-v0.3-RP #region-us \n", "# DavidAU/EE-Maid-7B-Slerp-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/EE-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
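The card above is an empty template, but this record's metadata gives the repo id and pipeline tag, so a hedged getting-started sketch is possible. The example sentence is invented, and the label set is whatever the checkpoint was trained with (the id suggests NCBI-disease NER).

```python
# Hedged getting-started sketch for the otherwise-empty card above; the repo id
# comes from this record, the example sentence is invented.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Irisissocute/my_fine_tuned_biogpt_ncbi",
    aggregation_strategy="simple",
)
print(ner("The patient was diagnosed with cystic fibrosis."))
```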
{"library_name": "transformers", "tags": []}
Irisissocute/my_fine_tuned_biogpt_ncbi
null
[ "transformers", "safetensors", "gpt2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T03:24:19+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt2 #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/Lelantos-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF This model was converted to GGUF format from [`ND911/Lelantos-Noro-Lamia-Maid-7B-Slerp`](https://huggingface.co/ND911/Lelantos-Noro-Lamia-Maid-7B-Slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ND911/Lelantos-Noro-Lamia-Maid-7B-Slerp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Lelantos-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF --model lelantos-noro-lamia-maid-7b-slerp.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Lelantos-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF --model lelantos-noro-lamia-maid-7b-slerp.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ```bash git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lelantos-noro-lamia-maid-7b-slerp.Q6_K.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "ND911/EEM-Noro-Lamia-Maid-7B-Slerp", "SanjiWatsuki/Lelantos-Maid-DPO-7B", "llama-cpp", "gguf-my-repo"], "base_model": ["ND911/EEM-Noro-Lamia-Maid-7B-Slerp", "SanjiWatsuki/Lelantos-Maid-DPO-7B"]}
DavidAU/Lelantos-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "ND911/EEM-Noro-Lamia-Maid-7B-Slerp", "SanjiWatsuki/Lelantos-Maid-DPO-7B", "llama-cpp", "gguf-my-repo", "base_model:ND911/EEM-Noro-Lamia-Maid-7B-Slerp", "base_model:SanjiWatsuki/Lelantos-Maid-DPO-7B", "region:us" ]
null
2024-04-12T03:24:20+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #ND911/EEM-Noro-Lamia-Maid-7B-Slerp #SanjiWatsuki/Lelantos-Maid-DPO-7B #llama-cpp #gguf-my-repo #base_model-ND911/EEM-Noro-Lamia-Maid-7B-Slerp #base_model-SanjiWatsuki/Lelantos-Maid-DPO-7B #region-us
# DavidAU/Lelantos-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF This model was converted to GGUF format from 'ND911/Lelantos-Noro-Lamia-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Lelantos-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Lelantos-Noro-Lamia-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #ND911/EEM-Noro-Lamia-Maid-7B-Slerp #SanjiWatsuki/Lelantos-Maid-DPO-7B #llama-cpp #gguf-my-repo #base_model-ND911/EEM-Noro-Lamia-Maid-7B-Slerp #base_model-SanjiWatsuki/Lelantos-Maid-DPO-7B #region-us \n", "# DavidAU/Lelantos-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/Lelantos-Noro-Lamia-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/EEM-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF This model was converted to GGUF format from [`ND911/EEM-Noro-Lamia-Maid-7B-Slerp`](https://huggingface.co/ND911/EEM-Noro-Lamia-Maid-7B-Slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ND911/EEM-Noro-Lamia-Maid-7B-Slerp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/EEM-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF --model eem-noro-lamia-maid-7b-slerp.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/EEM-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF --model eem-noro-lamia-maid-7b-slerp.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ```bash git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eem-noro-lamia-maid-7b-slerp.Q6_K.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "NeverSleep/Noromaid-7B-0.4-DPO", "openerotica/mistral-7b-lamia-v0.1", "llama-cpp", "gguf-my-repo"], "base_model": ["NeverSleep/Noromaid-7B-0.4-DPO", "openerotica/mistral-7b-lamia-v0.1"]}
DavidAU/EEM-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "NeverSleep/Noromaid-7B-0.4-DPO", "openerotica/mistral-7b-lamia-v0.1", "llama-cpp", "gguf-my-repo", "base_model:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:openerotica/mistral-7b-lamia-v0.1", "region:us" ]
null
2024-04-12T03:25:29+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #NeverSleep/Noromaid-7B-0.4-DPO #openerotica/mistral-7b-lamia-v0.1 #llama-cpp #gguf-my-repo #base_model-NeverSleep/Noromaid-7B-0.4-DPO #base_model-openerotica/mistral-7b-lamia-v0.1 #region-us
# DavidAU/EEM-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF This model was converted to GGUF format from 'ND911/EEM-Noro-Lamia-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/EEM-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/EEM-Noro-Lamia-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #NeverSleep/Noromaid-7B-0.4-DPO #openerotica/mistral-7b-lamia-v0.1 #llama-cpp #gguf-my-repo #base_model-NeverSleep/Noromaid-7B-0.4-DPO #base_model-openerotica/mistral-7b-lamia-v0.1 #region-us \n", "# DavidAU/EEM-Noro-Lamia-Maid-7B-Slerp-Q6_K-GGUF\nThis model was converted to GGUF format from 'ND911/EEM-Noro-Lamia-Maid-7B-Slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]