| Column          | Type            | Values / lengths |
|:----------------|:----------------|:-----------------|
| pipeline_tag    | stringclasses   | 48 values        |
| library_name    | stringclasses   | 198 values       |
| text            | stringlengths   | 1 to 900k        |
| metadata        | stringlengths   | 2 to 438k        |
| id              | stringlengths   | 5 to 122         |
| last_modified   | null            |                  |
| tags            | sequencelengths | 1 to 1.84k       |
| sha             | null            |                  |
| created_at      | stringlengths   | 25 to 25         |
| arxiv           | sequencelengths | 0 to 201         |
| languages       | sequencelengths | 0 to 1.83k       |
| tags_str        | stringlengths   | 17 to 9.34k      |
| text_str        | stringlengths   | 0 to 389k        |
| text_lists      | sequencelengths | 0 to 722         |
| processed_texts | sequencelengths | 1 to 723         |
| tokens_length   | sequencelengths | 1 to 723         |
| input_texts     | sequencelengths | 1 to 1           |
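The rows below can be pulled directly from the Hub with the `datasets` library. A minimal sketch, assuming a hypothetical repository id (`your-org/model-cards` is a placeholder, not the real dataset name):

```python
from datasets import load_dataset

# "your-org/model-cards" is a placeholder id; substitute the actual dataset repo.
ds = load_dataset("your-org/model-cards", split="train")

print(ds.column_names)        # should match the schema table above
row = ds[0]
print(row["pipeline_tag"], row["library_name"], row["id"])
print(row["text"][:300])      # raw model-card markdown
print(len(row["processed_texts"]), len(row["tokens_length"]))
```

Each record carries both the raw card (`text`) and the processed variants (`text_str`, `processed_texts`, `input_texts`) listed in the schema.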
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
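The card above leaves its quick-start section as a placeholder. Since the row's tags mark this as a `transformers` text-generation checkpoint in `safetensors` format (repo id `abc88767/model36` in the id field below), a hedged sketch of the standard loading path might look like this; treat it as an illustration, not the author's documented usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/model36"   # repo id taken from this row
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```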
{"library_name": "transformers", "tags": []}
abc88767/model36
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T01:28:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 41, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # your_model_checkpoint-finetuned-your_task This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8249 - Matthews Correlation: 0.5196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4889 | 1.0 | 1069 | 0.4690 | 0.4181 | | 0.3321 | 2.0 | 2138 | 0.5897 | 0.5184 | | 0.2233 | 3.0 | 3207 | 0.8249 | 0.5196 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
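The card lists hyperparameters and results but no inference snippet. A minimal sketch using the generic `transformers` pipeline API, assuming the fine-tuned classification head was pushed with the checkpoint (the repo id comes from this row; the example sentence is arbitrary):

```python
from transformers import pipeline

# Repo id taken from this row; the card does not name the dataset or the label set.
classifier = pipeline(
    "text-classification",
    model="hanzla107/your_model_checkpoint-finetuned-your_task",
)
print(classifier("The quick brown fox jumps over the lazy dog."))
```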
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "your_model_checkpoint-finetuned-your_task", "results": []}]}
hanzla107/your_model_checkpoint-finetuned-your_task
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T01:28:45+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
your\_model\_checkpoint-finetuned-your\_task ============================================ This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.8249 * Matthews Correlation: 0.5196 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 59, 101, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
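As with the earlier auto-generated card, the quick-start section here is empty. Given the row's tags (`bert`, `feature-extraction`), a hedged example of extracting sentence embeddings; the mean-pooling step is an assumption on my part, not something the card documents:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "andersonbcdefg/tiny-emb-2024-05-02_01-29-42"   # repo id from this row
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["a short sentence to embed", "another example sentence"]
inputs = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # (batch, seq_len, hidden)
mask = inputs["attention_mask"].unsqueeze(-1)         # ignore padding when pooling
embeddings = (hidden * mask).sum(1) / mask.sum(1)     # mean pooling (an assumption)
print(embeddings.shape)
```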
{"library_name": "transformers", "tags": []}
andersonbcdefg/tiny-emb-2024-05-02_01-29-42
null
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T01:29:42+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 32, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2 This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
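This card (and the seed-3 and seed-4 variants further down, which differ only in the random seed) documents a `gpt_neox` text-classification fine-tune but gives no usage code. A minimal sketch with the `transformers` pipeline, assuming the classification head was exported with the checkpoint; the sample email text is made up:

```python
from transformers import pipeline

# Repo id from this row; swap the seed suffix for the seed-3 or seed-4 checkpoints.
classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2",
)
print(classifier("Congratulations, you have been selected to receive a free prize!"))
```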
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2", "results": []}]}
AlignmentResearch/robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T01:29:42+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2 This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 70, 58, 7, 9, 9, 4, 93, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3 This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3", "results": []}]}
AlignmentResearch/robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T01:32:44+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3 This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 70, 58, 7, 9, 9, 4, 93, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4 This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4", "results": []}]}
AlignmentResearch/robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T01:34:21+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4 This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 70, 58, 7, 9, 9, 4, 93, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
null
# bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF --model phi-3-mini-128k-instruct.Q4_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF --model phi-3-mini-128k-instruct.Q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-128k-instruct.Q4_0.gguf -n 128 ```
{"language": ["en"], "license": "mit", "tags": ["nlp", "code", "llama-cpp", "gguf-my-repo"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]}
bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:mit", "region:us" ]
null
2024-05-02T01:36:21+00:00
[]
[ "en" ]
TAGS #gguf #nlp #code #llama-cpp #gguf-my-repo #text-generation #en #license-mit #region-us
# bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF This model was converted to GGUF format from 'microsoft/Phi-3-mini-128k-instruct' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF\nThis model was converted to GGUF format from 'microsoft/Phi-3-mini-128k-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #nlp #code #llama-cpp #gguf-my-repo #text-generation #en #license-mit #region-us \n", "# bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF\nThis model was converted to GGUF format from 'microsoft/Phi-3-mini-128k-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ 39, 82, 52 ]
[ "TAGS\n#gguf #nlp #code #llama-cpp #gguf-my-repo #text-generation #en #license-mit #region-us \n# bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF\nThis model was converted to GGUF format from 'microsoft/Phi-3-mini-128k-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-classification
setfit
# SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model was trained within the context of a larger system for ABSA, which looks like so: 1. Use a spaCy model to select possible aspect span candidates. 2. **Use this SetFit model to filter these possible aspect span candidates.** 3. Use a SetFit model to classify the filtered aspect span candidates. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **spaCy Model:** en_core_web_sm - **SetFitABSA Aspect Model:** [marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2](https://huggingface.co/marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2) - **SetFitABSA Polarity Model:** [setfit-absa-polarity](https://huggingface.co/setfit-absa-polarity) - **Maximum Sequence Length:** 256 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | aspect | <ul><li>'cord:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'battery life:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'service center:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li></ul> | | no aspect | <ul><li>'night:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'skip:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'exchange:The tech guy then said the service center does not do 1-to-1 
exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8948 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import AbsaModel # Download from the 🤗 Hub model = AbsaModel.from_pretrained( "marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2", "setfit-absa-polarity", ) # Run inference preds = model("The food was great, but the venue is just way too busy.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 2 | 21.9670 | 75 | | Label | Training Sample Count | |:----------|:----------------------| | no aspect | 690 | | aspect | 644 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 16) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0000 | 1 | 0.3662 | - | | 0.0015 | 50 | 0.3374 | - | | 0.0029 | 100 | 0.3411 | - | | 0.0044 | 150 | 0.2945 | - | | 0.0059 | 200 | 0.2944 | - | | 0.0073 | 250 | 0.2942 | - | | 0.0088 | 300 | 0.2409 | - | | 0.0103 | 350 | 0.2817 | - | | 0.0118 | 400 | 0.3149 | - | | 0.0132 | 450 | 0.2618 | - | | 0.0147 | 500 | 0.247 | - | | 0.0162 | 550 | 0.2883 | - | | 0.0176 | 600 | 0.2783 | - | | 0.0191 | 650 | 0.2418 | - | | 0.0206 | 700 | 0.2938 | - | | 0.0220 | 750 | 0.2376 | - | | 0.0235 | 800 | 0.2652 | - | | 0.0250 | 850 | 0.2442 | - | | 0.0265 | 900 | 0.2678 | - | | 0.0279 | 950 | 0.2216 | - | | 0.0294 | 1000 | 0.1816 | - | | 0.0309 | 1050 | 0.1102 | - | | 0.0323 | 1100 | 0.2985 | - | | 0.0338 | 1150 | 0.1124 | - | | 0.0353 | 1200 | 0.1075 | - | | 0.0367 | 1250 | 0.0819 | - | | 0.0382 | 1300 | 0.1238 | - | | 0.0397 | 1350 | 0.0529 | - | | 0.0412 | 1400 | 0.026 | - | | 0.0426 | 1450 | 0.0289 | - | | 0.0441 | 1500 | 0.067 | - | | 0.0456 | 1550 | 0.0276 | - | | 0.0470 | 1600 | 0.0162 | - | | 0.0485 | 1650 | 0.0083 | - | | 0.0500 | 1700 | 0.0017 | - | | 0.0514 | 1750 | 0.0028 | - | | 0.0529 | 1800 | 0.0045 | - | | 0.0544 | 1850 | 0.0022 | - | | 0.0558 | 1900 | 0.0014 | - | | 0.0573 | 1950 | 0.0059 | - | | 0.0588 | 2000 | 0.0019 | - | | 0.0603 | 2050 | 0.0014 | - | | 0.0617 | 2100 | 0.0022 | - | | 0.0632 | 2150 | 0.0005 | - | | 0.0647 | 2200 | 0.0008 | - | | 0.0661 | 2250 | 0.0005 | - | | 0.0676 | 2300 | 0.0006 | - | | 0.0691 | 2350 | 0.0003 | - | | 0.0705 | 2400 | 0.0007 | - | | 0.0720 | 2450 | 0.0005 | - | | 0.0735 | 2500 | 0.0005 | - | | 0.0750 | 2550 | 0.0612 | - 
| | 0.0764 | 2600 | 0.0004 | - | | 0.0779 | 2650 | 0.041 | - | | 0.0794 | 2700 | 0.0002 | - | | 0.0808 | 2750 | 0.0003 | - | | 0.0823 | 2800 | 0.0002 | - | | 0.0838 | 2850 | 0.0002 | - | | 0.0852 | 2900 | 0.0002 | - | | 0.0867 | 2950 | 0.0004 | - | | 0.0882 | 3000 | 0.0006 | - | | 0.0897 | 3050 | 0.0601 | - | | 0.0911 | 3100 | 0.0002 | - | | 0.0926 | 3150 | 0.0108 | - | | 0.0941 | 3200 | 0.0003 | - | | 0.0955 | 3250 | 0.0363 | - | | 0.0970 | 3300 | 0.0006 | - | | 0.0985 | 3350 | 0.0002 | - | | 0.0999 | 3400 | 0.0033 | - | | 0.1014 | 3450 | 0.0002 | - | | 0.1029 | 3500 | 0.0002 | - | | 0.1044 | 3550 | 0.0006 | - | | 0.1058 | 3600 | 0.0002 | - | | 0.1073 | 3650 | 0.0002 | - | | 0.1088 | 3700 | 0.0001 | - | | 0.1102 | 3750 | 0.0002 | - | | 0.1117 | 3800 | 0.0002 | - | | 0.1132 | 3850 | 0.0004 | - | | 0.1146 | 3900 | 0.0003 | - | | 0.1161 | 3950 | 0.0001 | - | | 0.1176 | 4000 | 0.0004 | - | | 0.1190 | 4050 | 0.0003 | - | | 0.1205 | 4100 | 0.001 | - | | 0.1220 | 4150 | 0.0002 | - | | 0.1235 | 4200 | 0.0001 | - | | 0.1249 | 4250 | 0.0003 | - | | 0.1264 | 4300 | 0.0003 | - | | 0.1279 | 4350 | 0.0002 | - | | 0.1293 | 4400 | 0.0001 | - | | 0.1308 | 4450 | 0.0001 | - | | 0.1323 | 4500 | 0.0001 | - | | 0.1337 | 4550 | 0.0001 | - | | 0.1352 | 4600 | 0.0001 | - | | 0.1367 | 4650 | 0.0003 | - | | 0.1382 | 4700 | 0.0006 | - | | 0.1396 | 4750 | 0.0003 | - | | 0.1411 | 4800 | 0.0001 | - | | 0.1426 | 4850 | 0.0011 | - | | 0.1440 | 4900 | 0.0001 | - | | 0.1455 | 4950 | 0.0001 | - | | 0.1470 | 5000 | 0.0001 | - | | 0.1484 | 5050 | 0.0001 | - | | 0.1499 | 5100 | 0.0002 | - | | 0.1514 | 5150 | 0.0497 | - | | 0.1529 | 5200 | 0.0002 | - | | 0.1543 | 5250 | 0.0001 | - | | 0.1558 | 5300 | 0.0008 | - | | 0.1573 | 5350 | 0.0001 | - | | 0.1587 | 5400 | 0.0002 | - | | 0.1602 | 5450 | 0.0001 | - | | 0.1617 | 5500 | 0.0003 | - | | 0.1631 | 5550 | 0.0003 | - | | 0.1646 | 5600 | 0.0004 | - | | 0.1661 | 5650 | 0.0002 | - | | 0.1675 | 5700 | 0.0002 | - | | 0.1690 | 5750 | 0.0001 | - | | 0.1705 | 5800 | 0.0001 | - | | 0.1720 | 5850 | 0.0001 | - | | 0.1734 | 5900 | 0.0004 | - | | 0.1749 | 5950 | 0.0001 | - | | 0.1764 | 6000 | 0.0001 | - | | 0.1778 | 6050 | 0.0001 | - | | 0.125 | 1 | 0.0002 | - | | 0.5 | 4 | 0.0003 | - | | 1.0 | 8 | 0.0 | - | | 0.0000 | 1 | 0.0001 | - | | 0.0015 | 50 | 0.0001 | - | | 0.0029 | 100 | 0.0 | - | | 0.0044 | 150 | 0.0001 | - | | 0.125 | 1 | 0.0 | - | | 0.5 | 4 | 0.0 | - | | 0.0000 | 1 | 0.0003 | - | | 0.0009 | 50 | 0.0003 | - | | 0.0018 | 100 | 0.0003 | - | | 0.0027 | 150 | 0.0001 | - | | 0.0036 | 200 | 0.0001 | - | | 0.0045 | 250 | 0.1015 | - | | 0.0054 | 300 | 0.0005 | - | | 0.0063 | 350 | 0.0579 | - | | 0.0072 | 400 | 0.0001 | - | | 0.0081 | 450 | 0.0897 | - | | 0.0090 | 500 | 0.0618 | - | | 0.0099 | 550 | 0.0002 | - | | 0.0108 | 600 | 0.0001 | - | | 0.0117 | 650 | 0.0004 | - | | 0.0126 | 700 | 0.0002 | - | | 0.0135 | 750 | 0.0002 | - | | 0.0143 | 800 | 0.0001 | - | | 0.0152 | 850 | 0.062 | - | | 0.0161 | 900 | 0.0004 | - | | 0.0170 | 950 | 0.0002 | - | | 0.0179 | 1000 | 0.0001 | - | | 0.0188 | 1050 | 0.0628 | - | | 0.0197 | 1100 | 0.0003 | - | | 0.0206 | 1150 | 0.0003 | - | | 0.0215 | 1200 | 0.0001 | - | | 0.0224 | 1250 | 0.0001 | - | | 0.0233 | 1300 | 0.0001 | - | | 0.0000 | 1 | 0.0002 | - | | 0.0009 | 50 | 0.0002 | - | | 0.0018 | 100 | 0.0001 | - | | 0.0027 | 150 | 0.0001 | - | | 0.0036 | 200 | 0.0001 | - | | 0.0045 | 250 | 0.0002 | - | | 0.0054 | 300 | 0.0001 | - | | 0.0063 | 350 | 0.0002 | - | | 0.0072 | 400 | 0.0002 | - | | 0.0081 | 450 | 0.0262 | - | | 0.0090 | 500 | 0.0001 | - | | 0.0099 | 
550 | 0.0002 | - | | 0.0108 | 600 | 0.0001 | - | | 0.0117 | 650 | 0.0001 | - | | 0.0126 | 700 | 0.0001 | - | | 0.0135 | 750 | 0.0001 | - | | 0.0143 | 800 | 0.0001 | - | | 0.0152 | 850 | 0.0002 | - | | 0.0161 | 900 | 0.0001 | - | | 0.0170 | 950 | 0.0001 | - | | 0.0179 | 1000 | 0.0001 | - | | 0.0188 | 1050 | 0.06 | - | | 0.0197 | 1100 | 0.0001 | - | | 0.0206 | 1150 | 0.0001 | - | | 0.0215 | 1200 | 0.0001 | - | | 0.0224 | 1250 | 0.0001 | - | | 0.0233 | 1300 | 0.0001 | - | | 0.0242 | 1350 | 0.0001 | - | | 0.0251 | 1400 | 0.0001 | - | | 0.0260 | 1450 | 0.0001 | - | | 0.0269 | 1500 | 0.0002 | - | | 0.0278 | 1550 | 0.0001 | - | | 0.0287 | 1600 | 0.0001 | - | | 0.0296 | 1650 | 0.0125 | - | | 0.0305 | 1700 | 0.0001 | - | | 0.0314 | 1750 | 0.0001 | - | | 0.0323 | 1800 | 0.0001 | - | | 0.0332 | 1850 | 0.0001 | - | | 0.0341 | 1900 | 0.0001 | - | | 0.0350 | 1950 | 0.0001 | - | | 0.0359 | 2000 | 0.0001 | - | | 0.0368 | 2050 | 0.0001 | - | | 0.0377 | 2100 | 0.0002 | - | | 0.0386 | 2150 | 0.0001 | - | | 0.0395 | 2200 | 0.0001 | - | | 0.0404 | 2250 | 0.0407 | - | | 0.0412 | 2300 | 0.0001 | - | | 0.0421 | 2350 | 0.0001 | - | | 0.0430 | 2400 | 0.0001 | - | | 0.0439 | 2450 | 0.0001 | - | | 0.0448 | 2500 | 0.0001 | - | | 0.0457 | 2550 | 0.0 | - | | 0.0466 | 2600 | 0.0 | - | | 0.0475 | 2650 | 0.0001 | - | | 0.0484 | 2700 | 0.0 | - | | 0.0493 | 2750 | 0.0001 | - | | 0.0502 | 2800 | 0.0001 | - | | 0.0511 | 2850 | 0.0001 | - | | 0.0520 | 2900 | 0.0001 | - | | 0.0529 | 2950 | 0.0002 | - | | 0.0538 | 3000 | 0.0001 | - | | 0.0547 | 3050 | 0.0001 | - | | 0.0556 | 3100 | 0.0001 | - | | 0.0565 | 3150 | 0.0001 | - | | 0.0574 | 3200 | 0.0 | - | | 0.0583 | 3250 | 0.0 | - | | 0.0592 | 3300 | 0.0 | - | | 0.0601 | 3350 | 0.0001 | - | | 0.0610 | 3400 | 0.0 | - | | 0.0619 | 3450 | 0.0 | - | | 0.0628 | 3500 | 0.0001 | - | | 0.0637 | 3550 | 0.0001 | - | | 0.0646 | 3600 | 0.0 | - | | 0.0655 | 3650 | 0.0001 | - | | 0.0664 | 3700 | 0.0 | - | | 0.0673 | 3750 | 0.0001 | - | | 0.0681 | 3800 | 0.0 | - | | 0.0690 | 3850 | 0.0005 | - | | 0.0699 | 3900 | 0.0001 | - | | 0.0708 | 3950 | 0.0001 | - | | 0.0717 | 4000 | 0.0 | - | | 0.0726 | 4050 | 0.0001 | - | | 0.0735 | 4100 | 0.0009 | - | | 0.0744 | 4150 | 0.0001 | - | | 0.0753 | 4200 | 0.0001 | - | | 0.0762 | 4250 | 0.0001 | - | | 0.0771 | 4300 | 0.0 | - | | 0.0780 | 4350 | 0.0001 | - | | 0.0789 | 4400 | 0.0001 | - | | 0.0798 | 4450 | 0.0001 | - | | 0.0807 | 4500 | 0.0 | - | | 0.0816 | 4550 | 0.0 | - | | 0.0825 | 4600 | 0.0001 | - | | 0.0834 | 4650 | 0.0 | - | | 0.0843 | 4700 | 0.0 | - | | 0.0852 | 4750 | 0.0 | - | | 0.0861 | 4800 | 0.0 | - | | 0.0870 | 4850 | 0.0 | - | | 0.0879 | 4900 | 0.0004 | - | | 0.0888 | 4950 | 0.0002 | - | | 0.0897 | 5000 | 0.0001 | - | | 0.0906 | 5050 | 0.0001 | - | | 0.0915 | 5100 | 0.0 | - | | 0.0924 | 5150 | 0.0026 | - | | 0.0933 | 5200 | 0.0549 | - | | 0.0942 | 5250 | 0.0001 | - | | 0.0950 | 5300 | 0.0011 | - | | 0.0959 | 5350 | 0.0 | - | | 0.0968 | 5400 | 0.0 | - | | 0.0977 | 5450 | 0.0 | - | | 0.0986 | 5500 | 0.0002 | - | | 0.0995 | 5550 | 0.0001 | - | | 0.1004 | 5600 | 0.0 | - | | 0.1013 | 5650 | 0.0001 | - | | 0.1022 | 5700 | 0.0001 | - | | 0.1031 | 5750 | 0.0 | - | | 0.1040 | 5800 | 0.0 | - | | 0.1049 | 5850 | 0.0 | - | | 0.1058 | 5900 | 0.0203 | - | | 0.1067 | 5950 | 0.0001 | - | | 0.1076 | 6000 | 0.0 | - | | 0.1085 | 6050 | 0.0 | - | | 0.1094 | 6100 | 0.0 | - | | 0.1103 | 6150 | 0.0 | - | | 0.1112 | 6200 | 0.0001 | - | | 0.1121 | 6250 | 0.0 | - | | 0.1130 | 6300 | 0.0 | - | | 0.1139 | 6350 | 0.0 | - | | 0.1148 | 6400 | 0.0 | - | | 0.1157 | 6450 | 0.0164 | 
- | | 0.1166 | 6500 | 0.0001 | - | | 0.1175 | 6550 | 0.0 | - | | 0.1184 | 6600 | 0.0001 | - | | 0.1193 | 6650 | 0.0002 | - | | 0.1202 | 6700 | 0.0001 | - | | 0.1211 | 6750 | 0.0 | - | | 0.1219 | 6800 | 0.0 | - | | 0.1228 | 6850 | 0.0 | - | | 0.1237 | 6900 | 0.0 | - | | 0.1246 | 6950 | 0.0 | - | | 0.1255 | 7000 | 0.0001 | - | | 0.1264 | 7050 | 0.0 | - | | 0.1273 | 7100 | 0.0 | - | | 0.1282 | 7150 | 0.0 | - | | 0.1291 | 7200 | 0.0002 | - | | 0.1300 | 7250 | 0.0 | - | | 0.1309 | 7300 | 0.0 | - | | 0.1318 | 7350 | 0.0 | - | | 0.1327 | 7400 | 0.0 | - | | 0.1336 | 7450 | 0.0 | - | | 0.1345 | 7500 | 0.0002 | - | | 0.1354 | 7550 | 0.0 | - | | 0.1363 | 7600 | 0.0 | - | | 0.1372 | 7650 | 0.0001 | - | | 0.1381 | 7700 | 0.0001 | - | | 0.1390 | 7750 | 0.0001 | - | | 0.1399 | 7800 | 0.0001 | - | | 0.1408 | 7850 | 0.0 | - | | 0.1417 | 7900 | 0.0 | - | | 0.1426 | 7950 | 0.0 | - | | 0.1435 | 8000 | 0.0142 | - | | 0.1444 | 8050 | 0.0001 | - | | 0.1453 | 8100 | 0.0 | - | | 0.1462 | 8150 | 0.0002 | - | | 0.1471 | 8200 | 0.0 | - | | 0.1480 | 8250 | 0.0 | - | | 0.1488 | 8300 | 0.0 | - | | 0.1497 | 8350 | 0.0 | - | | 0.1506 | 8400 | 0.0003 | - | | 0.1515 | 8450 | 0.0 | - | | 0.1524 | 8500 | 0.0 | - | | 0.1533 | 8550 | 0.0 | - | | 0.1542 | 8600 | 0.0 | - | | 0.1551 | 8650 | 0.0 | - | | 0.1560 | 8700 | 0.0 | - | | 0.1569 | 8750 | 0.0 | - | | 0.1578 | 8800 | 0.0 | - | | 0.1587 | 8850 | 0.0 | - | | 0.1596 | 8900 | 0.0 | - | | 0.1605 | 8950 | 0.0 | - | | 0.1614 | 9000 | 0.0 | - | | 0.1623 | 9050 | 0.0 | - | | 0.1632 | 9100 | 0.0 | - | | 0.1641 | 9150 | 0.0 | - | | 0.1650 | 9200 | 0.0 | - | | 0.1659 | 9250 | 0.0001 | - | | 0.1668 | 9300 | 0.0 | - | | 0.1677 | 9350 | 0.0 | - | | 0.1686 | 9400 | 0.0 | - | | 0.1695 | 9450 | 0.0 | - | | 0.1704 | 9500 | 0.0 | - | | 0.1713 | 9550 | 0.0 | - | | 0.1722 | 9600 | 0.0 | - | | 0.1731 | 9650 | 0.0 | - | | 0.1740 | 9700 | 0.0 | - | | 0.1749 | 9750 | 0.0 | - | | 0.1758 | 9800 | 0.0 | - | | 0.1766 | 9850 | 0.0 | - | | 0.1775 | 9900 | 0.0 | - | | 0.1784 | 9950 | 0.0 | - | | 0.1793 | 10000 | 0.0 | - | | 0.1802 | 10050 | 0.0097 | - | | 0.1811 | 10100 | 0.0 | - | | 0.1820 | 10150 | 0.0 | - | | 0.1829 | 10200 | 0.0 | - | | 0.1838 | 10250 | 0.0 | - | | 0.1847 | 10300 | 0.0001 | - | | 0.1856 | 10350 | 0.0 | - | | 0.1865 | 10400 | 0.0 | - | | 0.1874 | 10450 | 0.0 | - | | 0.1883 | 10500 | 0.0 | - | | 0.1892 | 10550 | 0.0 | - | | 0.1901 | 10600 | 0.0 | - | | 0.1910 | 10650 | 0.0 | - | | 0.1919 | 10700 | 0.0 | - | | 0.1928 | 10750 | 0.0 | - | | 0.1937 | 10800 | 0.0 | - | | 0.1946 | 10850 | 0.0 | - | | 0.1955 | 10900 | 0.0 | - | | 0.1964 | 10950 | 0.0 | - | | 0.1973 | 11000 | 0.0001 | - | | 0.1982 | 11050 | 0.0 | - | | 0.1991 | 11100 | 0.0 | - | | 0.2000 | 11150 | 0.0 | - | | 0.2009 | 11200 | 0.0 | - | | 0.2018 | 11250 | 0.0004 | - | | 0.2027 | 11300 | 0.0001 | - | | 0.2035 | 11350 | 0.0001 | - | | 0.2044 | 11400 | 0.0 | - | | 0.2053 | 11450 | 0.0001 | - | | 0.2062 | 11500 | 0.0 | - | | 0.2071 | 11550 | 0.0001 | - | | 0.2080 | 11600 | 0.0 | - | | 0.2089 | 11650 | 0.0 | - | | 0.2098 | 11700 | 0.0 | - | | 0.2107 | 11750 | 0.0 | - | | 0.2116 | 11800 | 0.0 | - | | 0.2125 | 11850 | 0.0 | - | | 0.2134 | 11900 | 0.0 | - | | 0.2143 | 11950 | 0.0001 | - | | 0.2152 | 12000 | 0.0 | - | | 0.2161 | 12050 | 0.0 | - | | 0.2170 | 12100 | 0.0 | - | | 0.2179 | 12150 | 0.0 | - | | 0.2188 | 12200 | 0.0 | - | | 0.2197 | 12250 | 0.0 | - | | 0.2206 | 12300 | 0.0 | - | | 0.2215 | 12350 | 0.0 | - | | 0.2224 | 12400 | 0.0 | - | | 0.2233 | 12450 | 0.0 | - | | 0.2242 | 12500 | 0.0 | - | | 0.2251 | 12550 | 0.0 | - | | 
0.2260 | 12600 | 0.0 | - | | 0.2269 | 12650 | 0.0 | - | | 0.2278 | 12700 | 0.0 | - | | 0.2287 | 12750 | 0.0 | - | | 0.2296 | 12800 | 0.0 | - | | 0.2304 | 12850 | 0.0 | - | | 0.2313 | 12900 | 0.0 | - | | 0.2322 | 12950 | 0.0 | - | | 0.2331 | 13000 | 0.0 | - | | 0.2340 | 13050 | 0.0 | - | | 0.2349 | 13100 | 0.0 | - | | 0.2358 | 13150 | 0.0264 | - | | 0.2367 | 13200 | 0.0 | - | | 0.2376 | 13250 | 0.0 | - | | 0.2385 | 13300 | 0.0 | - | | 0.2394 | 13350 | 0.0 | - | | 0.2403 | 13400 | 0.0 | - | | 0.2412 | 13450 | 0.0 | - | | 0.2421 | 13500 | 0.0 | - | | 0.2430 | 13550 | 0.0 | - | | 0.2439 | 13600 | 0.0 | - | | 0.2448 | 13650 | 0.0 | - | | 0.2457 | 13700 | 0.0 | - | | 0.2466 | 13750 | 0.0 | - | | 0.2475 | 13800 | 0.0 | - | | 0.2484 | 13850 | 0.0 | - | | 0.2493 | 13900 | 0.0 | - | | 0.2502 | 13950 | 0.0 | - | | 0.2511 | 14000 | 0.0 | - | | 0.2520 | 14050 | 0.0 | - | | 0.2529 | 14100 | 0.0 | - | | 0.2538 | 14150 | 0.0001 | - | | 0.2547 | 14200 | 0.0 | - | | 0.2556 | 14250 | 0.0 | - | | 0.2565 | 14300 | 0.0 | - | | 0.2573 | 14350 | 0.0 | - | | 0.2582 | 14400 | 0.0 | - | | 0.2591 | 14450 | 0.0 | - | | 0.2600 | 14500 | 0.0 | - | | 0.2609 | 14550 | 0.0001 | - | | 0.2618 | 14600 | 0.0 | - | | 0.2627 | 14650 | 0.0 | - | | 0.2636 | 14700 | 0.0 | - | | 0.2645 | 14750 | 0.0 | - | | 0.2654 | 14800 | 0.0 | - | | 0.2663 | 14850 | 0.0 | - | | 0.2672 | 14900 | 0.0 | - | | 0.2681 | 14950 | 0.0001 | - | | 0.2690 | 15000 | 0.0 | - | | 0.2699 | 15050 | 0.0 | - | | 0.2708 | 15100 | 0.0 | - | | 0.2717 | 15150 | 0.0 | - | | 0.2726 | 15200 | 0.0 | - | | 0.2735 | 15250 | 0.0 | - | | 0.2744 | 15300 | 0.0 | - | | 0.2753 | 15350 | 0.0 | - | | 0.2762 | 15400 | 0.0 | - | | 0.2771 | 15450 | 0.0 | - | | 0.2780 | 15500 | 0.0001 | - | | 0.2789 | 15550 | 0.0621 | - | | 0.2798 | 15600 | 0.0056 | - | | 0.2807 | 15650 | 0.0 | - | | 0.2816 | 15700 | 0.0 | - | | 0.2825 | 15750 | 0.0145 | - | | 0.2834 | 15800 | 0.0 | - | | 0.2842 | 15850 | 0.0 | - | | 0.2851 | 15900 | 0.0166 | - | | 0.2860 | 15950 | 0.0 | - | | 0.2869 | 16000 | 0.0 | - | | 0.2878 | 16050 | 0.0 | - | | 0.2887 | 16100 | 0.0166 | - | | 0.2896 | 16150 | 0.0 | - | | 0.2905 | 16200 | 0.0 | - | | 0.2914 | 16250 | 0.0169 | - | | 0.2923 | 16300 | 0.0 | - | | 0.2932 | 16350 | 0.0 | - | | 0.2941 | 16400 | 0.0 | - | | 0.2950 | 16450 | 0.0 | - | | 0.2959 | 16500 | 0.0 | - | | 0.2968 | 16550 | 0.0 | - | | 0.2977 | 16600 | 0.0 | - | | 0.2986 | 16650 | 0.0 | - | | 0.2995 | 16700 | 0.0 | - | | 0.3004 | 16750 | 0.0 | - | | 0.3013 | 16800 | 0.0 | - | | 0.3022 | 16850 | 0.0 | - | | 0.3031 | 16900 | 0.0 | - | | 0.3040 | 16950 | 0.0 | - | | 0.3049 | 17000 | 0.0 | - | | 0.3058 | 17050 | 0.0138 | - | | 0.3067 | 17100 | 0.0 | - | | 0.3076 | 17150 | 0.0 | - | | 0.3085 | 17200 | 0.0 | - | | 0.3094 | 17250 | 0.0 | - | | 0.3103 | 17300 | 0.0 | - | | 0.3111 | 17350 | 0.0 | - | | 0.3120 | 17400 | 0.0 | - | | 0.3129 | 17450 | 0.0001 | - | | 0.3138 | 17500 | 0.0 | - | | 0.3147 | 17550 | 0.0 | - | | 0.3156 | 17600 | 0.0 | - | | 0.3165 | 17650 | 0.0 | - | | 0.3174 | 17700 | 0.0 | - | | 0.3183 | 17750 | 0.0 | - | | 0.3192 | 17800 | 0.0 | - | | 0.3201 | 17850 | 0.0 | - | | 0.3210 | 17900 | 0.0 | - | | 0.3219 | 17950 | 0.0001 | - | | 0.3228 | 18000 | 0.0 | - | | 0.3237 | 18050 | 0.0 | - | | 0.3246 | 18100 | 0.0 | - | | 0.3255 | 18150 | 0.0 | - | | 0.3264 | 18200 | 0.0 | - | | 0.3273 | 18250 | 0.0 | - | | 0.3282 | 18300 | 0.0 | - | | 0.3291 | 18350 | 0.0 | - | | 0.3300 | 18400 | 0.0 | - | | 0.3309 | 18450 | 0.0003 | - | | 0.3318 | 18500 | 0.0 | - | | 0.3327 | 18550 | 0.0 | - | | 0.3336 | 18600 | 0.0 | - | | 
0.3345 | 18650 | 0.0 | - | | 0.3354 | 18700 | 0.0 | - | | 0.3363 | 18750 | 0.0 | - | | 0.3372 | 18800 | 0.0 | - | | 0.3380 | 18850 | 0.0 | - | | 0.3389 | 18900 | 0.0 | - | | 0.3398 | 18950 | 0.0 | - | | 0.3407 | 19000 | 0.0 | - | | 0.3416 | 19050 | 0.0 | - | | 0.3425 | 19100 | 0.0 | - | | 0.3434 | 19150 | 0.0 | - | | 0.3443 | 19200 | 0.0 | - | | 0.3452 | 19250 | 0.0 | - | | 0.3461 | 19300 | 0.0 | - | | 0.3470 | 19350 | 0.0 | - | | 0.3479 | 19400 | 0.0 | - | | 0.3488 | 19450 | 0.0 | - | | 0.3497 | 19500 | 0.0001 | - | | 0.3506 | 19550 | 0.0131 | - | | 0.3515 | 19600 | 0.0 | - | | 0.3524 | 19650 | 0.0 | - | | 0.3533 | 19700 | 0.0 | - | | 0.3542 | 19750 | 0.0 | - | | 0.3551 | 19800 | 0.0 | - | | 0.3560 | 19850 | 0.0 | - | | 0.3569 | 19900 | 0.0 | - | | 0.3578 | 19950 | 0.0 | - | | 0.3587 | 20000 | 0.0 | - | | 0.3596 | 20050 | 0.0 | - | | 0.3605 | 20100 | 0.0 | - | | 0.3614 | 20150 | 0.0 | - | | 0.3623 | 20200 | 0.0208 | - | | 0.3632 | 20250 | 0.0 | - | | 0.3641 | 20300 | 0.0 | - | | 0.3650 | 20350 | 0.0 | - | | 0.3658 | 20400 | 0.0 | - | | 0.3667 | 20450 | 0.0 | - | | 0.3676 | 20500 | 0.0 | - | | 0.3685 | 20550 | 0.0 | - | | 0.3694 | 20600 | 0.0 | - | | 0.3703 | 20650 | 0.0 | - | | 0.3712 | 20700 | 0.0 | - | | 0.3721 | 20750 | 0.0 | - | | 0.3730 | 20800 | 0.0 | - | | 0.3739 | 20850 | 0.0 | - | | 0.3748 | 20900 | 0.0 | - | | 0.3757 | 20950 | 0.0 | - | | 0.3766 | 21000 | 0.0 | - | | 0.3775 | 21050 | 0.0 | - | | 0.3784 | 21100 | 0.0 | - | | 0.3793 | 21150 | 0.0 | - | | 0.3802 | 21200 | 0.0 | - | | 0.3811 | 21250 | 0.0 | - | | 0.3820 | 21300 | 0.0 | - | | 0.3829 | 21350 | 0.0 | - | | 0.3838 | 21400 | 0.0 | - | | 0.3847 | 21450 | 0.0 | - | | 0.3856 | 21500 | 0.0 | - | | 0.3865 | 21550 | 0.0 | - | | 0.3874 | 21600 | 0.0 | - | | 0.3883 | 21650 | 0.0 | - | | 0.3892 | 21700 | 0.0 | - | | 0.3901 | 21750 | 0.0 | - | | 0.3910 | 21800 | 0.0 | - | | 0.3919 | 21850 | 0.0001 | - | | 0.3927 | 21900 | 0.0 | - | | 0.3936 | 21950 | 0.0 | - | | 0.3945 | 22000 | 0.0 | - | | 0.3954 | 22050 | 0.0 | - | | 0.3963 | 22100 | 0.0 | - | | 0.3972 | 22150 | 0.0 | - | | 0.3981 | 22200 | 0.0 | - | | 0.3990 | 22250 | 0.0 | - | | 0.3999 | 22300 | 0.0 | - | | 0.4008 | 22350 | 0.0 | - | | 0.4017 | 22400 | 0.0 | - | | 0.4026 | 22450 | 0.0 | - | | 0.4035 | 22500 | 0.0 | - | | 0.4044 | 22550 | 0.0 | - | | 0.4053 | 22600 | 0.0217 | - | | 0.4062 | 22650 | 0.0 | - | | 0.4071 | 22700 | 0.0 | - | | 0.4080 | 22750 | 0.0 | - | | 0.4089 | 22800 | 0.0 | - | | 0.4098 | 22850 | 0.0 | - | | 0.4107 | 22900 | 0.0 | - | | 0.4116 | 22950 | 0.0 | - | | 0.4125 | 23000 | 0.0 | - | | 0.4134 | 23050 | 0.0 | - | | 0.4143 | 23100 | 0.0 | - | | 0.4152 | 23150 | 0.0 | - | | 0.4161 | 23200 | 0.0 | - | | 0.4170 | 23250 | 0.0 | - | | 0.4179 | 23300 | 0.0 | - | | 0.4188 | 23350 | 0.0 | - | | 0.4196 | 23400 | 0.0 | - | | 0.4205 | 23450 | 0.0 | - | | 0.4214 | 23500 | 0.0 | - | | 0.4223 | 23550 | 0.0 | - | | 0.4232 | 23600 | 0.0 | - | | 0.4241 | 23650 | 0.0 | - | | 0.4250 | 23700 | 0.0 | - | | 0.4259 | 23750 | 0.0 | - | | 0.4268 | 23800 | 0.0 | - | | 0.4277 | 23850 | 0.0 | - | | 0.4286 | 23900 | 0.0098 | - | | 0.4295 | 23950 | 0.0 | - | | 0.4304 | 24000 | 0.0 | - | | 0.4313 | 24050 | 0.0 | - | | 0.4322 | 24100 | 0.0 | - | | 0.4331 | 24150 | 0.0 | - | | 0.4340 | 24200 | 0.0 | - | | 0.4349 | 24250 | 0.0 | - | | 0.4358 | 24300 | 0.0089 | - | | 0.4367 | 24350 | 0.0 | - | | 0.4376 | 24400 | 0.0 | - | | 0.4385 | 24450 | 0.0 | - | | 0.4394 | 24500 | 0.0 | - | | 0.4403 | 24550 | 0.0 | - | | 0.4412 | 24600 | 0.0092 | - | | 0.4421 | 24650 | 0.0003 | - | | 0.4430 | 24700 | 
0.0283 | - | | 0.4439 | 24750 | 0.0 | - | | 0.4448 | 24800 | 0.0 | - | | 0.4457 | 24850 | 0.0 | - | | 0.4465 | 24900 | 0.0 | - | | 0.4474 | 24950 | 0.0 | - | | 0.4483 | 25000 | 0.0 | - | | 0.4492 | 25050 | 0.0 | - | | 0.4501 | 25100 | 0.0 | - | | 0.4510 | 25150 | 0.0002 | - | | 0.4519 | 25200 | 0.0016 | - | | 0.4528 | 25250 | 0.0 | - | | 0.4537 | 25300 | 0.0 | - | | 0.4546 | 25350 | 0.0 | - | | 0.4555 | 25400 | 0.0 | - | | 0.4564 | 25450 | 0.0 | - | | 0.4573 | 25500 | 0.0 | - | | 0.4582 | 25550 | 0.0 | - | | 0.4591 | 25600 | 0.0 | - | | 0.4600 | 25650 | 0.0171 | - | | 0.4609 | 25700 | 0.0 | - | | 0.4618 | 25750 | 0.0 | - | | 0.4627 | 25800 | 0.0161 | - | | 0.4636 | 25850 | 0.0 | - | | 0.4645 | 25900 | 0.0 | - | | 0.4654 | 25950 | 0.0 | - | | 0.4663 | 26000 | 0.0 | - | | 0.4672 | 26050 | 0.0078 | - | | 0.4681 | 26100 | 0.0 | - | | 0.4690 | 26150 | 0.0 | - | | 0.4699 | 26200 | 0.0 | - | | 0.4708 | 26250 | 0.0 | - | | 0.4717 | 26300 | 0.0 | - | | 0.4726 | 26350 | 0.0 | - | | 0.4734 | 26400 | 0.0 | - | | 0.4743 | 26450 | 0.0 | - | | 0.4752 | 26500 | 0.0091 | - | | 0.4761 | 26550 | 0.0 | - | | 0.4770 | 26600 | 0.0 | - | | 0.4779 | 26650 | 0.0 | - | | 0.4788 | 26700 | 0.0 | - | | 0.4797 | 26750 | 0.0 | - | | 0.4806 | 26800 | 0.0 | - | | 0.4815 | 26850 | 0.0 | - | | 0.4824 | 26900 | 0.0 | - | | 0.4833 | 26950 | 0.0 | - | | 0.4842 | 27000 | 0.0 | - | | 0.4851 | 27050 | 0.0 | - | | 0.4860 | 27100 | 0.0 | - | | 0.4869 | 27150 | 0.0 | - | | 0.4878 | 27200 | 0.0 | - | | 0.4887 | 27250 | 0.0 | - | | 0.4896 | 27300 | 0.0 | - | | 0.4905 | 27350 | 0.0 | - | | 0.4914 | 27400 | 0.0 | - | | 0.4923 | 27450 | 0.0 | - | | 0.4932 | 27500 | 0.0 | - | | 0.4941 | 27550 | 0.0 | - | | 0.4950 | 27600 | 0.0 | - | | 0.4959 | 27650 | 0.0 | - | | 0.4968 | 27700 | 0.0 | - | | 0.4977 | 27750 | 0.0 | - | | 0.4986 | 27800 | 0.0 | - | | 0.4995 | 27850 | 0.0 | - | | 0.5003 | 27900 | 0.0273 | - | | 0.5012 | 27950 | 0.0 | - | | 0.5021 | 28000 | 0.0 | - | | 0.5030 | 28050 | 0.0 | - | | 0.5039 | 28100 | 0.0 | - | | 0.5048 | 28150 | 0.0 | - | | 0.5057 | 28200 | 0.0 | - | | 0.5066 | 28250 | 0.0 | - | | 0.5075 | 28300 | 0.0 | - | | 0.5084 | 28350 | 0.0 | - | | 0.5093 | 28400 | 0.0 | - | | 0.5102 | 28450 | 0.0 | - | | 0.5111 | 28500 | 0.0 | - | | 0.5120 | 28550 | 0.0 | - | | 0.5129 | 28600 | 0.0 | - | | 0.5138 | 28650 | 0.0 | - | | 0.5147 | 28700 | 0.0 | - | | 0.5156 | 28750 | 0.0 | - | | 0.5165 | 28800 | 0.0 | - | | 0.5174 | 28850 | 0.0 | - | | 0.5183 | 28900 | 0.0 | - | | 0.5192 | 28950 | 0.017 | - | | 0.5201 | 29000 | 0.0 | - | | 0.5210 | 29050 | 0.0 | - | | 0.5219 | 29100 | 0.0 | - | | 0.5228 | 29150 | 0.0 | - | | 0.5237 | 29200 | 0.0 | - | | 0.5246 | 29250 | 0.0 | - | | 0.5255 | 29300 | 0.0 | - | | 0.5264 | 29350 | 0.0 | - | | 0.5273 | 29400 | 0.0 | - | | 0.5281 | 29450 | 0.0 | - | | 0.5290 | 29500 | 0.0211 | - | | 0.5299 | 29550 | 0.0 | - | | 0.5308 | 29600 | 0.0 | - | | 0.5317 | 29650 | 0.0 | - | | 0.5326 | 29700 | 0.0 | - | | 0.5335 | 29750 | 0.0 | - | | 0.5344 | 29800 | 0.0 | - | | 0.5353 | 29850 | 0.0 | - | | 0.5362 | 29900 | 0.0 | - | | 0.5371 | 29950 | 0.0 | - | | 0.5380 | 30000 | 0.0 | - | | 0.5389 | 30050 | 0.0002 | - | | 0.5398 | 30100 | 0.0 | - | | 0.5407 | 30150 | 0.0 | - | | 0.5416 | 30200 | 0.0 | - | | 0.5425 | 30250 | 0.0 | - | | 0.5434 | 30300 | 0.0 | - | | 0.5443 | 30350 | 0.0 | - | | 0.5452 | 30400 | 0.0 | - | | 0.5461 | 30450 | 0.0 | - | | 0.5470 | 30500 | 0.0158 | - | | 0.5479 | 30550 | 0.0 | - | | 0.5488 | 30600 | 0.0 | - | | 0.5497 | 30650 | 0.0 | - | | 0.5506 | 30700 | 0.0 | - | | 0.5515 | 30750 | 0.0165 | - 
| | 0.5524 | 30800 | 0.0 | - | | 0.5533 | 30850 | 0.0 | - | | 0.5542 | 30900 | 0.0 | - | | 0.5550 | 30950 | 0.0 | - | | 0.5559 | 31000 | 0.0 | - | | 0.5568 | 31050 | 0.0 | - | | 0.5577 | 31100 | 0.0 | - | | 0.5586 | 31150 | 0.0132 | - | | 0.5595 | 31200 | 0.0 | - | | 0.5604 | 31250 | 0.0 | - | | 0.5613 | 31300 | 0.0 | - | | 0.5622 | 31350 | 0.0 | - | | 0.5631 | 31400 | 0.0 | - | | 0.5640 | 31450 | 0.0 | - | | 0.5649 | 31500 | 0.0 | - | | 0.5658 | 31550 | 0.0 | - | | 0.5667 | 31600 | 0.0 | - | | 0.5676 | 31650 | 0.0 | - | | 0.5685 | 31700 | 0.0 | - | | 0.5694 | 31750 | 0.0 | - | | 0.5703 | 31800 | 0.0 | - | | 0.5712 | 31850 | 0.0 | - | | 0.5721 | 31900 | 0.0 | - | | 0.5730 | 31950 | 0.0185 | - | | 0.5739 | 32000 | 0.0 | - | | 0.5748 | 32050 | 0.0 | - | | 0.5757 | 32100 | 0.0 | - | | 0.5766 | 32150 | 0.0 | - | | 0.5775 | 32200 | 0.0 | - | | 0.5784 | 32250 | 0.0 | - | | 0.5793 | 32300 | 0.0 | - | | 0.5802 | 32350 | 0.0 | - | | 0.5811 | 32400 | 0.0 | - | | 0.5819 | 32450 | 0.0 | - | | 0.5828 | 32500 | 0.0 | - | | 0.5837 | 32550 | 0.0 | - | | 0.5846 | 32600 | 0.0 | - | | 0.5855 | 32650 | 0.0 | - | | 0.5864 | 32700 | 0.0 | - | | 0.5873 | 32750 | 0.0 | - | | 0.5882 | 32800 | 0.0 | - | | 0.5891 | 32850 | 0.0 | - | | 0.5900 | 32900 | 0.0 | - | | 0.5909 | 32950 | 0.0 | - | | 0.5918 | 33000 | 0.0 | - | | 0.5927 | 33050 | 0.0 | - | | 0.5936 | 33100 | 0.0 | - | | 0.5945 | 33150 | 0.0 | - | | 0.5954 | 33200 | 0.0 | - | | 0.5963 | 33250 | 0.0 | - | | 0.5972 | 33300 | 0.0 | - | | 0.5981 | 33350 | 0.0 | - | | 0.5990 | 33400 | 0.0 | - | | 0.5999 | 33450 | 0.0 | - | | 0.6008 | 33500 | 0.0 | - | | 0.6017 | 33550 | 0.0 | - | | 0.6026 | 33600 | 0.0 | - | | 0.6035 | 33650 | 0.0 | - | | 0.6044 | 33700 | 0.0 | - | | 0.6053 | 33750 | 0.0 | - | | 0.6062 | 33800 | 0.0 | - | | 0.6071 | 33850 | 0.0 | - | | 0.6080 | 33900 | 0.0 | - | | 0.6088 | 33950 | 0.0 | - | | 0.6097 | 34000 | 0.0 | - | | 0.6106 | 34050 | 0.0 | - | | 0.6115 | 34100 | 0.0 | - | | 0.6124 | 34150 | 0.0 | - | | 0.6133 | 34200 | 0.0 | - | | 0.6142 | 34250 | 0.0 | - | | 0.6151 | 34300 | 0.0 | - | | 0.6160 | 34350 | 0.0 | - | | 0.6169 | 34400 | 0.0 | - | | 0.6178 | 34450 | 0.0 | - | | 0.6187 | 34500 | 0.0 | - | | 0.6196 | 34550 | 0.0 | - | | 0.6205 | 34600 | 0.0 | - | | 0.6214 | 34650 | 0.0 | - | | 0.6223 | 34700 | 0.0 | - | | 0.6232 | 34750 | 0.0 | - | | 0.6241 | 34800 | 0.0 | - | | 0.6250 | 34850 | 0.0 | - | | 0.6259 | 34900 | 0.0174 | - | | 0.6268 | 34950 | 0.0 | - | | 0.6277 | 35000 | 0.0 | - | | 0.6286 | 35050 | 0.0 | - | | 0.6295 | 35100 | 0.0173 | - | | 0.6304 | 35150 | 0.0 | - | | 0.6313 | 35200 | 0.0 | - | | 0.6322 | 35250 | 0.0 | - | | 0.6331 | 35300 | 0.0 | - | | 0.6340 | 35350 | 0.0 | - | | 0.6349 | 35400 | 0.0 | - | | 0.6357 | 35450 | 0.0 | - | | 0.6366 | 35500 | 0.0 | - | | 0.6375 | 35550 | 0.0 | - | | 0.6384 | 35600 | 0.0 | - | | 0.6393 | 35650 | 0.0 | - | | 0.6402 | 35700 | 0.0 | - | | 0.6411 | 35750 | 0.0 | - | | 0.6420 | 35800 | 0.0 | - | | 0.6429 | 35850 | 0.0 | - | | 0.6438 | 35900 | 0.0 | - | | 0.6447 | 35950 | 0.0 | - | | 0.6456 | 36000 | 0.0 | - | | 0.6465 | 36050 | 0.0 | - | | 0.6474 | 36100 | 0.0 | - | | 0.6483 | 36150 | 0.0 | - | | 0.6492 | 36200 | 0.0 | - | | 0.6501 | 36250 | 0.0 | - | | 0.6510 | 36300 | 0.0115 | - | | 0.6519 | 36350 | 0.0 | - | | 0.6528 | 36400 | 0.0 | - | | 0.6537 | 36450 | 0.0 | - | | 0.6546 | 36500 | 0.0 | - | | 0.6555 | 36550 | 0.0 | - | | 0.6564 | 36600 | 0.0204 | - | | 0.6573 | 36650 | 0.0 | - | | 0.6582 | 36700 | 0.0125 | - | | 0.6591 | 36750 | 0.0 | - | | 0.6600 | 36800 | 0.0 | - | | 0.6609 | 36850 | 0.0 
| - | | 0.6618 | 36900 | 0.0 | - | | 0.6626 | 36950 | 0.0 | - | | 0.6635 | 37000 | 0.0 | - | | 0.6644 | 37050 | 0.0 | - | | 0.6653 | 37100 | 0.0109 | - | | 0.6662 | 37150 | 0.0 | - | | 0.6671 | 37200 | 0.0 | - | | 0.6680 | 37250 | 0.0 | - | | 0.6689 | 37300 | 0.0 | - | | 0.6698 | 37350 | 0.0 | - | | 0.6707 | 37400 | 0.0 | - | | 0.6716 | 37450 | 0.0 | - | | 0.6725 | 37500 | 0.0 | - | | 0.6734 | 37550 | 0.0 | - | | 0.6743 | 37600 | 0.0 | - | | 0.6752 | 37650 | 0.0 | - | | 0.6761 | 37700 | 0.0 | - | | 0.6770 | 37750 | 0.0 | - | | 0.6779 | 37800 | 0.0 | - | | 0.6788 | 37850 | 0.0 | - | | 0.6797 | 37900 | 0.0 | - | | 0.6806 | 37950 | 0.0 | - | | 0.6815 | 38000 | 0.0 | - | | 0.6824 | 38050 | 0.0 | - | | 0.6833 | 38100 | 0.0 | - | | 0.6842 | 38150 | 0.0 | - | | 0.6851 | 38200 | 0.0 | - | | 0.6860 | 38250 | 0.0 | - | | 0.6869 | 38300 | 0.0 | - | | 0.6878 | 38350 | 0.0 | - | | 0.6887 | 38400 | 0.0 | - | | 0.6896 | 38450 | 0.0 | - | | 0.6904 | 38500 | 0.0 | - | | 0.6913 | 38550 | 0.0 | - | | 0.6922 | 38600 | 0.0 | - | | 0.6931 | 38650 | 0.0 | - | | 0.6940 | 38700 | 0.0 | - | | 0.6949 | 38750 | 0.0 | - | | 0.6958 | 38800 | 0.0 | - | | 0.6967 | 38850 | 0.0 | - | | 0.6976 | 38900 | 0.0 | - | | 0.6985 | 38950 | 0.0 | - | | 0.6994 | 39000 | 0.0 | - | | 0.7003 | 39050 | 0.0 | - | | 0.7012 | 39100 | 0.0 | - | | 0.7021 | 39150 | 0.0 | - | | 0.7030 | 39200 | 0.0 | - | | 0.7039 | 39250 | 0.0 | - | | 0.7048 | 39300 | 0.0 | - | | 0.7057 | 39350 | 0.0 | - | | 0.7066 | 39400 | 0.0 | - | | 0.7075 | 39450 | 0.0 | - | | 0.7084 | 39500 | 0.0 | - | | 0.7093 | 39550 | 0.0 | - | | 0.7102 | 39600 | 0.0 | - | | 0.7111 | 39650 | 0.0 | - | | 0.7120 | 39700 | 0.0 | - | | 0.7129 | 39750 | 0.0 | - | | 0.7138 | 39800 | 0.0 | - | | 0.7147 | 39850 | 0.0 | - | | 0.7156 | 39900 | 0.0 | - | | 0.7165 | 39950 | 0.0 | - | | 0.7173 | 40000 | 0.0 | - | | 0.7182 | 40050 | 0.0 | - | | 0.7191 | 40100 | 0.0 | - | | 0.7200 | 40150 | 0.0 | - | | 0.7209 | 40200 | 0.0 | - | | 0.7218 | 40250 | 0.0 | - | | 0.7227 | 40300 | 0.0 | - | | 0.7236 | 40350 | 0.0 | - | | 0.7245 | 40400 | 0.0 | - | | 0.7254 | 40450 | 0.0 | - | | 0.7263 | 40500 | 0.0 | - | | 0.7272 | 40550 | 0.0 | - | | 0.7281 | 40600 | 0.0 | - | | 0.7290 | 40650 | 0.0 | - | | 0.7299 | 40700 | 0.0 | - | | 0.7308 | 40750 | 0.0 | - | | 0.7317 | 40800 | 0.0 | - | | 0.7326 | 40850 | 0.0 | - | | 0.7335 | 40900 | 0.0 | - | | 0.7344 | 40950 | 0.0 | - | | 0.7353 | 41000 | 0.0 | - | | 0.7362 | 41050 | 0.0 | - | | 0.7371 | 41100 | 0.0 | - | | 0.7380 | 41150 | 0.0153 | - | | 0.7389 | 41200 | 0.0 | - | | 0.7398 | 41250 | 0.0 | - | | 0.7407 | 41300 | 0.0 | - | | 0.7416 | 41350 | 0.0 | - | | 0.7425 | 41400 | 0.0 | - | | 0.7434 | 41450 | 0.0 | - | | 0.7442 | 41500 | 0.0 | - | | 0.7451 | 41550 | 0.0 | - | | 0.7460 | 41600 | 0.0 | - | | 0.7469 | 41650 | 0.0 | - | | 0.7478 | 41700 | 0.0 | - | | 0.7487 | 41750 | 0.0001 | - | | 0.7496 | 41800 | 0.0 | - | | 0.7505 | 41850 | 0.0 | - | | 0.7514 | 41900 | 0.0 | - | | 0.7523 | 41950 | 0.0 | - | | 0.7532 | 42000 | 0.0 | - | | 0.7541 | 42050 | 0.0 | - | | 0.7550 | 42100 | 0.0155 | - | | 0.7559 | 42150 | 0.0231 | - | | 0.7568 | 42200 | 0.0 | - | | 0.7577 | 42250 | 0.0 | - | | 0.7586 | 42300 | 0.0 | - | | 0.7595 | 42350 | 0.0172 | - | | 0.7604 | 42400 | 0.0169 | - | | 0.7613 | 42450 | 0.0 | - | | 0.7622 | 42500 | 0.0 | - | | 0.7631 | 42550 | 0.0157 | - | | 0.7640 | 42600 | 0.0 | - | | 0.7649 | 42650 | 0.0 | - | | 0.7658 | 42700 | 0.0 | - | | 0.7667 | 42750 | 0.0 | - | | 0.7676 | 42800 | 0.0 | - | | 0.7685 | 42850 | 0.0 | - | | 0.7694 | 42900 | 0.0 | - | | 0.7703 | 
42950 | 0.0208 | - | | 0.7711 | 43000 | 0.0 | - | | 0.7720 | 43050 | 0.0 | - | | 0.7729 | 43100 | 0.0 | - | | 0.7738 | 43150 | 0.0 | - | | 0.7747 | 43200 | 0.0 | - | | 0.7756 | 43250 | 0.0 | - | | 0.7765 | 43300 | 0.0 | - | | 0.7774 | 43350 | 0.0 | - | | 0.7783 | 43400 | 0.0 | - | | 0.7792 | 43450 | 0.0 | - | | 0.7801 | 43500 | 0.0 | - | | 0.7810 | 43550 | 0.0 | - | | 0.7819 | 43600 | 0.0 | - | | 0.7828 | 43650 | 0.0 | - | | 0.7837 | 43700 | 0.0 | - | | 0.7846 | 43750 | 0.0 | - | | 0.7855 | 43800 | 0.0 | - | | 0.7864 | 43850 | 0.0 | - | | 0.7873 | 43900 | 0.0 | - | | 0.7882 | 43950 | 0.0 | - | | 0.7891 | 44000 | 0.0 | - | | 0.7900 | 44050 | 0.0 | - | | 0.7909 | 44100 | 0.0 | - | | 0.7918 | 44150 | 0.0 | - | | 0.7927 | 44200 | 0.0 | - | | 0.7936 | 44250 | 0.0 | - | | 0.7945 | 44300 | 0.0 | - | | 0.7954 | 44350 | 0.0 | - | | 0.7963 | 44400 | 0.0 | - | | 0.7972 | 44450 | 0.0 | - | | 0.7980 | 44500 | 0.0 | - | | 0.7989 | 44550 | 0.0 | - | | 0.7998 | 44600 | 0.0 | - | | 0.8007 | 44650 | 0.0 | - | | 0.8016 | 44700 | 0.0 | - | | 0.8025 | 44750 | 0.0 | - | | 0.8034 | 44800 | 0.0 | - | | 0.8043 | 44850 | 0.0 | - | | 0.8052 | 44900 | 0.0 | - | | 0.8061 | 44950 | 0.0108 | - | | 0.8070 | 45000 | 0.0 | - | | 0.8079 | 45050 | 0.0 | - | | 0.8088 | 45100 | 0.0 | - | | 0.8097 | 45150 | 0.0 | - | | 0.8106 | 45200 | 0.0 | - | | 0.8115 | 45250 | 0.0 | - | | 0.8124 | 45300 | 0.0 | - | | 0.8133 | 45350 | 0.0 | - | | 0.8142 | 45400 | 0.0 | - | | 0.8151 | 45450 | 0.0 | - | | 0.8160 | 45500 | 0.0 | - | | 0.8169 | 45550 | 0.0 | - | | 0.8178 | 45600 | 0.0 | - | | 0.8187 | 45650 | 0.0 | - | | 0.8196 | 45700 | 0.0 | - | | 0.8205 | 45750 | 0.0 | - | | 0.8214 | 45800 | 0.0 | - | | 0.8223 | 45850 | 0.0 | - | | 0.8232 | 45900 | 0.0 | - | | 0.8241 | 45950 | 0.0 | - | | 0.8249 | 46000 | 0.0 | - | | 0.8258 | 46050 | 0.0 | - | | 0.8267 | 46100 | 0.0211 | - | | 0.8276 | 46150 | 0.0 | - | | 0.8285 | 46200 | 0.0 | - | | 0.8294 | 46250 | 0.0 | - | | 0.8303 | 46300 | 0.0 | - | | 0.8312 | 46350 | 0.0 | - | | 0.8321 | 46400 | 0.0 | - | | 0.8330 | 46450 | 0.0 | - | | 0.8339 | 46500 | 0.0 | - | | 0.8348 | 46550 | 0.0 | - | | 0.8357 | 46600 | 0.0 | - | | 0.8366 | 46650 | 0.0114 | - | | 0.8375 | 46700 | 0.0 | - | | 0.8384 | 46750 | 0.0 | - | | 0.8393 | 46800 | 0.0 | - | | 0.8402 | 46850 | 0.0 | - | | 0.8411 | 46900 | 0.0 | - | | 0.8420 | 46950 | 0.0 | - | | 0.8429 | 47000 | 0.0 | - | | 0.8438 | 47050 | 0.0 | - | | 0.8447 | 47100 | 0.0 | - | | 0.8456 | 47150 | 0.0 | - | | 0.8465 | 47200 | 0.0 | - | | 0.8474 | 47250 | 0.0 | - | | 0.8483 | 47300 | 0.0 | - | | 0.8492 | 47350 | 0.0 | - | | 0.8501 | 47400 | 0.0 | - | | 0.8510 | 47450 | 0.0 | - | | 0.8518 | 47500 | 0.0 | - | | 0.8527 | 47550 | 0.0 | - | | 0.8536 | 47600 | 0.0 | - | | 0.8545 | 47650 | 0.0 | - | | 0.8554 | 47700 | 0.0 | - | | 0.8563 | 47750 | 0.0 | - | | 0.8572 | 47800 | 0.0 | - | | 0.8581 | 47850 | 0.0 | - | | 0.8590 | 47900 | 0.0 | - | | 0.8599 | 47950 | 0.0 | - | | 0.8608 | 48000 | 0.0178 | - | | 0.8617 | 48050 | 0.0 | - | | 0.8626 | 48100 | 0.0 | - | | 0.8635 | 48150 | 0.0 | - | | 0.8644 | 48200 | 0.0 | - | | 0.8653 | 48250 | 0.0 | - | | 0.8662 | 48300 | 0.0 | - | | 0.8671 | 48350 | 0.0 | - | | 0.8680 | 48400 | 0.0146 | - | | 0.8689 | 48450 | 0.0 | - | | 0.8698 | 48500 | 0.0 | - | | 0.8707 | 48550 | 0.0 | - | | 0.8716 | 48600 | 0.0 | - | | 0.8725 | 48650 | 0.0 | - | | 0.8734 | 48700 | 0.0 | - | | 0.8743 | 48750 | 0.0 | - | | 0.8752 | 48800 | 0.0 | - | | 0.8761 | 48850 | 0.0146 | - | | 0.8770 | 48900 | 0.0 | - | | 0.8779 | 48950 | 0.0 | - | | 0.8788 | 49000 | 0.0 | - | | 
0.8796 | 49050 | 0.0145 | - | | 0.8805 | 49100 | 0.0 | - | | 0.8814 | 49150 | 0.0 | - | | 0.8823 | 49200 | 0.0 | - | | 0.8832 | 49250 | 0.0 | - | | 0.8841 | 49300 | 0.0 | - | | 0.8850 | 49350 | 0.0 | - | | 0.8859 | 49400 | 0.0 | - | | 0.8868 | 49450 | 0.0 | - | | 0.8877 | 49500 | 0.0 | - | | 0.8886 | 49550 | 0.0 | - | | 0.8895 | 49600 | 0.0 | - | | 0.8904 | 49650 | 0.0 | - | | 0.8913 | 49700 | 0.0 | - | | 0.8922 | 49750 | 0.0 | - | | 0.8931 | 49800 | 0.0 | - | | 0.8940 | 49850 | 0.0 | - | | 0.8949 | 49900 | 0.0 | - | | 0.8958 | 49950 | 0.0 | - | | 0.8967 | 50000 | 0.0 | - | | 0.8976 | 50050 | 0.0 | - | | 0.8985 | 50100 | 0.0 | - | | 0.8994 | 50150 | 0.0 | - | | 0.9003 | 50200 | 0.0 | - | | 0.9012 | 50250 | 0.0 | - | | 0.9021 | 50300 | 0.0 | - | | 0.9030 | 50350 | 0.0 | - | | 0.9039 | 50400 | 0.0 | - | | 0.9048 | 50450 | 0.0 | - | | 0.9057 | 50500 | 0.0 | - | | 0.9065 | 50550 | 0.0 | - | | 0.9074 | 50600 | 0.0 | - | | 0.9083 | 50650 | 0.0 | - | | 0.9092 | 50700 | 0.0 | - | | 0.9101 | 50750 | 0.0 | - | | 0.9110 | 50800 | 0.0 | - | | 0.9119 | 50850 | 0.0 | - | | 0.9128 | 50900 | 0.0 | - | | 0.9137 | 50950 | 0.0 | - | | 0.9146 | 51000 | 0.0 | - | | 0.9155 | 51050 | 0.0163 | - | | 0.9164 | 51100 | 0.0 | - | | 0.9173 | 51150 | 0.0 | - | | 0.9182 | 51200 | 0.0 | - | | 0.9191 | 51250 | 0.0 | - | | 0.9200 | 51300 | 0.0 | - | | 0.9209 | 51350 | 0.0 | - | | 0.9218 | 51400 | 0.0 | - | | 0.9227 | 51450 | 0.0 | - | | 0.9236 | 51500 | 0.0 | - | | 0.9245 | 51550 | 0.0 | - | | 0.9254 | 51600 | 0.0 | - | | 0.9263 | 51650 | 0.0 | - | | 0.9272 | 51700 | 0.0 | - | | 0.9281 | 51750 | 0.0 | - | | 0.9290 | 51800 | 0.0 | - | | 0.9299 | 51850 | 0.0 | - | | 0.9308 | 51900 | 0.0 | - | | 0.9317 | 51950 | 0.0 | - | | 0.9326 | 52000 | 0.0 | - | | 0.9334 | 52050 | 0.0163 | - | | 0.9343 | 52100 | 0.0 | - | | 0.9352 | 52150 | 0.0 | - | | 0.9361 | 52200 | 0.0 | - | | 0.9370 | 52250 | 0.0 | - | | 0.9379 | 52300 | 0.0 | - | | 0.9388 | 52350 | 0.0 | - | | 0.9397 | 52400 | 0.0 | - | | 0.9406 | 52450 | 0.0 | - | | 0.9415 | 52500 | 0.0162 | - | | 0.9424 | 52550 | 0.0 | - | | 0.9433 | 52600 | 0.0 | - | | 0.9442 | 52650 | 0.0 | - | | 0.9451 | 52700 | 0.0 | - | | 0.9460 | 52750 | 0.0 | - | | 0.9469 | 52800 | 0.0 | - | | 0.9478 | 52850 | 0.0149 | - | | 0.9487 | 52900 | 0.0 | - | | 0.9496 | 52950 | 0.0 | - | | 0.9505 | 53000 | 0.0 | - | | 0.9514 | 53050 | 0.0 | - | | 0.9523 | 53100 | 0.0 | - | | 0.9532 | 53150 | 0.0 | - | | 0.9541 | 53200 | 0.0 | - | | 0.9550 | 53250 | 0.0 | - | | 0.9559 | 53300 | 0.0 | - | | 0.9568 | 53350 | 0.0 | - | | 0.9577 | 53400 | 0.0 | - | | 0.9586 | 53450 | 0.0 | - | | 0.9595 | 53500 | 0.0 | - | | 0.9603 | 53550 | 0.0 | - | | 0.9612 | 53600 | 0.0 | - | | 0.9621 | 53650 | 0.0 | - | | 0.9630 | 53700 | 0.0 | - | | 0.9639 | 53750 | 0.0 | - | | 0.9648 | 53800 | 0.0 | - | | 0.9657 | 53850 | 0.0 | - | | 0.9666 | 53900 | 0.0 | - | | 0.9675 | 53950 | 0.0 | - | | 0.9684 | 54000 | 0.0 | - | | 0.9693 | 54050 | 0.0 | - | | 0.9702 | 54100 | 0.0 | - | | 0.9711 | 54150 | 0.0 | - | | 0.9720 | 54200 | 0.0 | - | | 0.9729 | 54250 | 0.0 | - | | 0.9738 | 54300 | 0.0 | - | | 0.9747 | 54350 | 0.0 | - | | 0.9756 | 54400 | 0.0 | - | | 0.9765 | 54450 | 0.0 | - | | 0.9774 | 54500 | 0.0 | - | | 0.9783 | 54550 | 0.0 | - | | 0.9792 | 54600 | 0.0 | - | | 0.9801 | 54650 | 0.0 | - | | 0.9810 | 54700 | 0.0 | - | | 0.9819 | 54750 | 0.0 | - | | 0.9828 | 54800 | 0.0 | - | | 0.9837 | 54850 | 0.0 | - | | 0.9846 | 54900 | 0.0 | - | | 0.9855 | 54950 | 0.0 | - | | 0.9864 | 55000 | 0.0 | - | | 0.9872 | 55050 | 0.0 | - | | 0.9881 | 55100 | 0.0156 | - | 
| 0.9890 | 55150 | 0.0 | - | | 0.9899 | 55200 | 0.0 | - | | 0.9908 | 55250 | 0.0 | - | | 0.9917 | 55300 | 0.0 | - | | 0.9926 | 55350 | 0.0 | - | | 0.9935 | 55400 | 0.0 | - | | 0.9944 | 55450 | 0.0 | - | | 0.9953 | 55500 | 0.0 | - | | 0.9962 | 55550 | 0.0 | - | | 0.9971 | 55600 | 0.0 | - | | 0.9980 | 55650 | 0.0 | - | | 0.9989 | 55700 | 0.0 | - | | 0.9998 | 55750 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - spaCy: 3.7.4 - Transformers: 4.40.1 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
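### Reproduction Sketch

The values listed under *Training Hyperparameters* correspond to fields of `setfit.TrainingArguments`. The snippet below is a hypothetical sketch of how an aspect/polarity model pair with those settings could be trained using the SetFit ABSA API; the toy dataset, the choice of polarity body, and the exact `AbsaTrainer` signature are assumptions that should be checked against the SetFit 1.0.3 documentation, not part of the original training setup.

```python
# Hypothetical sketch (not the original training script): maps the hyperparameters
# listed in this card onto the SetFit ABSA API. Dataset contents are placeholders.
from datasets import Dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# A toy ABSA dataset with the column layout expected by the ABSA trainer.
train_dataset = Dataset.from_dict({
    "text": ["The battery life is great but the screen is dim."] * 2,
    "span": ["battery life", "screen"],
    "label": ["positive", "negative"],
    "ordinal": [0, 0],
})

# Both the aspect filter and the polarity classifier start from the same body here.
model = AbsaModel.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2",
    "sentence-transformers/all-MiniLM-L6-v2",
    spacy_model="en_core_web_sm",
)

# Tuple-valued arguments give separate settings for the embedding and classifier phases,
# matching the (16, 2), (1, 16) and (2e-05, 1e-05) values reported above.
args = TrainingArguments(
    batch_size=(16, 2),
    num_epochs=(1, 16),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    seed=42,
    # margin=0.25, cosine_distance and CosineSimilarityLoss are the library defaults.
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```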
{"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/all-MiniLM-L6-v2", "widget": [{"text": "netbook:I am not going to sit here and complain about it not having a cd drive and what not because it is a netbook, it is made to be compact and if you want all the other stuff get a laptop."}, {"text": "price:I finally decided on this laptop because it was the right price for what I need it."}, {"text": "shipped:This laptop looked brand new and was shipped very quickly."}, {"text": "business:They offer the best warranty in the business, and don't 3rd party it out like Toshiba."}, {"text": "email:My husband uses it mostly for games, email and music."}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8947936336660373, "name": "Accuracy"}]}]}]}
marcelomoreno26/all-MiniLM-L6-v2-absa-aspect
null
[ "setfit", "safetensors", "bert", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-MiniLM-L6-v2", "model-index", "region:us" ]
null
2024-05-02T01:36:33+00:00
[ "2209.11055" ]
[]
TAGS #setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/all-MiniLM-L6-v2 #model-index #region-us
SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2 =============================================================== This is a SetFit model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses sentence-transformers/all-MiniLM-L6-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification. In particular, this model is in charge of filtering aspect span candidates. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a Sentence Transformer with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model was trained within the context of a larger system for ABSA, which looks like so: 1. Use a spaCy model to select possible aspect span candidates. 2. Use this SetFit model to filter these possible aspect span candidates. 3. Use a SetFit model to classify the filtered aspect span candidates. Model Details ------------- ### Model Description * Model Type: SetFit * Sentence Transformer body: sentence-transformers/all-MiniLM-L6-v2 * Classification head: a LogisticRegression instance * spaCy Model: en\_core\_web\_sm * SetFitABSA Aspect Model: marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2 * SetFitABSA Polarity Model: setfit-absa-polarity * Maximum Sequence Length: 256 tokens * Number of Classes: 2 classes ### Model Sources * Repository: SetFit on GitHub * Paper: Efficient Few-Shot Learning Without Prompts * Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts ### Model Labels Evaluation ---------- ### Metrics Uses ---- ### Direct Use for Inference First install the SetFit library: Then you can load this model and run inference. Training Details ---------------- ### Training Set Metrics ### Training Hyperparameters * batch\_size: (16, 2) * num\_epochs: (1, 16) * max\_steps: -1 * sampling\_strategy: oversampling * body\_learning\_rate: (2e-05, 1e-05) * head\_learning\_rate: 0.01 * loss: CosineSimilarityLoss * distance\_metric: cosine\_distance * margin: 0.25 * end\_to\_end: False * use\_amp: False * warmup\_proportion: 0.1 * seed: 42 * eval\_max\_steps: -1 * load\_best\_model\_at\_end: False ### Training Results ### Framework Versions * Python: 3.10.12 * SetFit: 1.0.3 * Sentence Transformers: 2.7.0 * spaCy: 3.7.4 * Transformers: 4.40.1 * PyTorch: 2.2.1+cu121 * Datasets: 2.19.0 * Tokenizers: 0.19.1 ### BibTeX
[ "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/all-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* spaCy Model: en\\_core\\_web\\_sm\n* SetFitABSA Aspect Model: marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2\n* SetFitABSA Polarity Model: setfit-absa-polarity\n* Maximum Sequence Length: 256 tokens\n* Number of Classes: 2 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.4\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ "TAGS\n#setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/all-MiniLM-L6-v2 #model-index #region-us \n", "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/all-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* spaCy Model: en\\_core\\_web\\_sm\n* SetFitABSA Aspect Model: marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2\n* SetFitABSA Polarity Model: setfit-absa-polarity\n* Maximum Sequence Length: 256 tokens\n* Number of Classes: 2 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.4\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ 69, 122, 42, 16, 10, 43, 7, 169, 5, 84, 6 ]
[ "TAGS\n#setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/all-MiniLM-L6-v2 #model-index #region-us \n### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/all-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* spaCy Model: en\\_core\\_web\\_sm\n* SetFitABSA Aspect Model: marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2\n* SetFitABSA Polarity Model: setfit-absa-polarity\n* Maximum Sequence Length: 256 tokens\n* Number of Classes: 2 classes### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts### Model Labels\n\n\n\nEvaluation\n----------### Metrics\n\n\n\nUses\n----### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------### Training Set Metrics### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False### Training Results### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.4\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1### BibTeX" ]
null
null
# Marcoro14-7B-slerp

Marcoro14-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Rimyy/Llama-2-7b-chat-finetuneGSMdata](https://huggingface.co/Rimyy/Llama-2-7b-chat-finetuneGSMdata)
* [Rimyy/Gemma-2b-finetuneGSMdata5ep](https://huggingface.co/Rimyy/Gemma-2b-finetuneGSMdata5ep)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Rimyy/Llama-2-7b-chat-finetuneGSMdata
        layer_range: [0, 10]
      - model: Rimyy/Gemma-2b-finetuneGSMdata5ep
        layer_range: [0, 10]
merge_method: slerp
base_model: Rimyy/Llama-2-7b-chat-finetuneGSMdata
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
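## 💻 Usage

A minimal inference sketch, assuming the merged weights are available under this repository id (`Rimyy/Marcoro14-7B-slerp`) and load through the standard `transformers` text-generation pipeline; the prompt is only an illustration. The merge itself can typically be reproduced by saving the configuration above to a file and passing it to mergekit's `mergekit-yaml` command.

```python
# Hypothetical usage sketch; the repository id and prompt are assumptions, not taken from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "Rimyy/Marcoro14-7B-slerp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires the accelerate package
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "A farmer has 12 sheep and buys 35 more. How many sheep does he have now?"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```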
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "Rimyy/Llama-2-7b-chat-finetuneGSMdata", "Rimyy/Gemma-2b-finetuneGSMdata5ep"]}
Rimyy/Marcoro14-7B-slerp
null
[ "merge", "mergekit", "lazymergekit", "Rimyy/Llama-2-7b-chat-finetuneGSMdata", "Rimyy/Gemma-2b-finetuneGSMdata5ep", "license:apache-2.0", "region:us" ]
null
2024-05-02T01:36:34+00:00
[]
[]
TAGS #merge #mergekit #lazymergekit #Rimyy/Llama-2-7b-chat-finetuneGSMdata #Rimyy/Gemma-2b-finetuneGSMdata5ep #license-apache-2.0 #region-us
# Marcoro14-7B-slerp Marcoro14-7B-slerp is a merge of the following models using mergekit: * Rimyy/Llama-2-7b-chat-finetuneGSMdata * Rimyy/Gemma-2b-finetuneGSMdata5ep ## Configuration \
[ "# Marcoro14-7B-slerp\n\nMarcoro14-7B-slerp is a merge of the following models using mergekit:\n* Rimyy/Llama-2-7b-chat-finetuneGSMdata\n* Rimyy/Gemma-2b-finetuneGSMdata5ep", "## Configuration\n\n\\" ]
[ "TAGS\n#merge #mergekit #lazymergekit #Rimyy/Llama-2-7b-chat-finetuneGSMdata #Rimyy/Gemma-2b-finetuneGSMdata5ep #license-apache-2.0 #region-us \n", "# Marcoro14-7B-slerp\n\nMarcoro14-7B-slerp is a merge of the following models using mergekit:\n* Rimyy/Llama-2-7b-chat-finetuneGSMdata\n* Rimyy/Gemma-2b-finetuneGSMdata5ep", "## Configuration\n\n\\" ]
[ 62, 71, 4 ]
[ "TAGS\n#merge #mergekit #lazymergekit #Rimyy/Llama-2-7b-chat-finetuneGSMdata #Rimyy/Gemma-2b-finetuneGSMdata5ep #license-apache-2.0 #region-us \n# Marcoro14-7B-slerp\n\nMarcoro14-7B-slerp is a merge of the following models using mergekit:\n* Rimyy/Llama-2-7b-chat-finetuneGSMdata\n* Rimyy/Gemma-2b-finetuneGSMdata5ep## Configuration\n\n\\" ]
text-classification
setfit
# SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model was trained within the context of a larger system for ABSA, which looks like so: 1. Use a spaCy model to select possible aspect span candidates. 2. Use a SetFit model to filter these possible aspect span candidates. 3. **Use this SetFit model to classify the filtered aspect span candidates.** ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **spaCy Model:** en_core_web_sm - **SetFitABSA Aspect Model:** [setfit-absa-aspect](https://huggingface.co/setfit-absa-aspect) - **SetFitABSA Polarity Model:** [marcelomoreno26/all-mpnet-base-v2-absa-polarity2](https://huggingface.co/marcelomoreno26/all-mpnet-base-v2-absa-polarity2) - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | neutral | <ul><li>'skip taking the cord with me because:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'The tech guy then said the:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'all dark, power light steady, hard:\xa0One night I turned the freaking thing off after using it, 
the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> | | positive | <ul><li>'of the good battery life.:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'is of high quality, has a:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li><li>'has a killer GUI, is extremely:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li></ul> | | negative | <ul><li>'then said the service center does not do:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'concern to the "sales" team, which is:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'on, no GUI, screen all:\xa0One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> | | conflict | <ul><li>'-No backlit keyboard, but not:-No backlit keyboard, but not an issue for me.'</li><li>"to replace the battery once, but:I did have to replace the battery once, but that was only a couple months ago and it's been working perfect ever since."</li><li>'Yes, they cost more, but:Yes, they cost more, but they more than make up for it in speed, construction quality, and longevity.'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.7788 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import AbsaModel # Download from the 🤗 Hub model = AbsaModel.from_pretrained( "setfit-absa-aspect", "marcelomoreno26/all-mpnet-base-v2-absa-polarity2", ) # Run inference preds = model("The food was great, but the venue is just way too busy.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 3 | 24.3447 | 80 | | Label | Training Sample Count | |:---------|:----------------------| | negative | 235 | | neutral | 127 | | positive | 271 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 16) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.3333 | 1 | 0.3749 | - | | 0.0030 | 50 | 0.3097 | - | | 0.0059 | 100 | 0.2214 | - | | 0.0089 | 150 | 0.2125 | - | | 0.0119 | 200 | 0.3202 | - | | 0.0148 | 250 | 0.1878 | - | | 0.0178 | 300 | 0.1208 | - | | 0.0208 | 350 | 0.2414 | - | | 0.0237 | 400 | 0.1961 | - | | 0.0267 | 450 | 0.0607 | - | | 0.0296 | 500 | 0.1103 | - | | 0.0326 | 550 | 0.1213 | - | | 0.0356 | 600 | 0.0972 | - | | 0.0385 | 650 | 0.0124 | - | | 0.0415 | 700 | 0.0151 | - | | 0.0445 | 750 | 0.1517 | - | | 0.0474 | 800 | 0.004 | - | | 0.0504 | 850 | 0.0204 | - | | 0.0534 | 900 | 0.0541 | - | | 0.0563 | 950 | 0.003 | - | | 0.0593 | 1000 | 0.0008 | - | | 0.0623 | 1050 | 0.0703 | - | | 0.0652 | 1100 | 0.0013 | - | | 0.0682 | 1150 | 0.0007 | - | | 0.0712 | 1200 | 0.0009 | - | | 0.0741 | 1250 | 0.0004 | - | | 0.0771 | 1300 | 0.0004 | - | | 0.0801 | 1350 | 0.0005 | - | | 0.0830 | 1400 | 0.0006 | - | | 0.0860 | 1450 | 0.0004 | - | | 0.0889 | 1500 | 0.0002 | - | | 0.0919 | 1550 | 0.0002 | - | | 0.0949 | 1600 | 0.0001 | - | | 0.0978 | 1650 | 0.0006 | - | | 0.1008 | 1700 | 0.0002 | - | | 0.1038 | 1750 | 0.0012 | - | | 0.1067 | 1800 | 0.0008 | - | | 0.1097 | 1850 | 0.0048 | - | | 0.1127 | 1900 | 0.0007 | - | | 0.1156 | 1950 | 0.0001 | - | | 0.1186 | 2000 | 0.0001 | - | | 0.1216 | 2050 | 0.0001 | - | | 0.1245 | 2100 | 0.0001 | - | | 0.1275 | 2150 | 0.0001 | - | | 0.1305 | 2200 | 0.0001 | - | | 0.1334 | 2250 | 0.0 | - | | 0.1364 | 2300 | 0.0001 | - | | 0.1394 | 2350 | 0.0002 | - | | 0.1423 | 2400 | 0.0 | - | | 0.1453 | 2450 | 0.0 | - | | 0.1482 | 2500 | 0.0589 | - | | 0.1512 | 2550 | 0.0036 | - | | 0.1542 | 2600 | 0.0013 | - | | 0.1571 | 2650 | 0.0 | - | | 0.1601 | 2700 | 0.0001 | - | | 0.1631 | 2750 | 0.0004 | - | | 0.1660 | 2800 | 0.0 | - | | 0.1690 | 2850 | 0.0002 | - | | 0.1720 | 2900 | 0.0096 | - | | 0.1749 | 2950 | 0.0 | - | | 0.1779 | 3000 | 0.0 | - | | 0.1809 | 3050 | 0.0001 | - | | 0.1838 | 3100 | 0.0 | - | | 0.1868 | 3150 | 0.0001 | - | | 0.1898 | 3200 | 0.0001 | - | | 0.1927 | 3250 | 0.0 | - | | 0.1957 | 3300 | 0.0 | - | | 0.1986 | 3350 | 0.0001 | - | | 0.2016 | 3400 | 0.0 | - | | 0.2046 | 3450 | 0.0002 | - | | 0.2075 | 3500 | 0.0 | - | | 0.2105 | 3550 | 0.0 | - | | 0.2135 | 3600 | 0.0001 | - | | 0.2164 | 3650 | 0.0 | - | | 0.2194 | 3700 | 0.0 | - | | 0.2224 | 3750 | 0.0001 | - | | 0.2253 | 3800 | 0.0 | - | | 0.2283 | 3850 | 0.0 | - | | 0.2313 | 3900 | 0.0 | - | | 0.2342 | 3950 | 0.0 | - | | 0.2372 | 4000 | 0.0 | - | | 0.2402 | 4050 | 0.0 | - | | 0.2431 | 4100 | 0.0 | - | | 0.2461 | 4150 | 0.0 | - | | 0.2491 | 4200 | 0.0 | - | | 0.2520 | 4250 | 0.0 | - | | 0.2550 | 4300 | 0.0 | - | | 0.2579 | 4350 | 0.0 | - | | 0.2609 | 4400 | 0.0 | - | | 0.2639 | 4450 | 0.0 | - | | 0.2668 | 4500 | 0.0 | - | | 
0.2698 | 4550 | 0.0 | - | | 0.2728 | 4600 | 0.0 | - | | 0.2757 | 4650 | 0.0 | - | | 0.2787 | 4700 | 0.0 | - | | 0.2817 | 4750 | 0.0 | - | | 0.2846 | 4800 | 0.0 | - | | 0.2876 | 4850 | 0.0001 | - | | 0.2906 | 4900 | 0.0071 | - | | 0.2935 | 4950 | 0.1151 | - | | 0.2965 | 5000 | 0.0055 | - | | 0.2995 | 5050 | 0.0005 | - | | 0.3024 | 5100 | 0.0041 | - | | 0.3054 | 5150 | 0.0001 | - | | 0.3083 | 5200 | 0.0003 | - | | 0.3113 | 5250 | 0.0001 | - | | 0.3143 | 5300 | 0.0 | - | | 0.3172 | 5350 | 0.0001 | - | | 0.3202 | 5400 | 0.0 | - | | 0.3232 | 5450 | 0.0 | - | | 0.3261 | 5500 | 0.0 | - | | 0.3291 | 5550 | 0.0 | - | | 0.3321 | 5600 | 0.0 | - | | 0.3350 | 5650 | 0.0 | - | | 0.3380 | 5700 | 0.0 | - | | 0.3410 | 5750 | 0.0 | - | | 0.3439 | 5800 | 0.0 | - | | 0.3469 | 5850 | 0.0 | - | | 0.3499 | 5900 | 0.0 | - | | 0.3528 | 5950 | 0.0 | - | | 0.3558 | 6000 | 0.0 | - | | 0.3588 | 6050 | 0.0 | - | | 0.3617 | 6100 | 0.0 | - | | 0.3647 | 6150 | 0.0 | - | | 0.3676 | 6200 | 0.0 | - | | 0.3706 | 6250 | 0.0 | - | | 0.3736 | 6300 | 0.0 | - | | 0.3765 | 6350 | 0.0 | - | | 0.3795 | 6400 | 0.0 | - | | 0.3825 | 6450 | 0.0 | - | | 0.3854 | 6500 | 0.0 | - | | 0.3884 | 6550 | 0.0 | - | | 0.3914 | 6600 | 0.0 | - | | 0.3943 | 6650 | 0.0 | - | | 0.3973 | 6700 | 0.0 | - | | 0.4003 | 6750 | 0.0 | - | | 0.4032 | 6800 | 0.0 | - | | 0.4062 | 6850 | 0.0 | - | | 0.4092 | 6900 | 0.0 | - | | 0.4121 | 6950 | 0.0 | - | | 0.4151 | 7000 | 0.0 | - | | 0.4181 | 7050 | 0.0 | - | | 0.4210 | 7100 | 0.0 | - | | 0.4240 | 7150 | 0.0 | - | | 0.4269 | 7200 | 0.0 | - | | 0.4299 | 7250 | 0.0 | - | | 0.4329 | 7300 | 0.0 | - | | 0.4358 | 7350 | 0.0 | - | | 0.4388 | 7400 | 0.0 | - | | 0.4418 | 7450 | 0.0 | - | | 0.4447 | 7500 | 0.0 | - | | 0.4477 | 7550 | 0.0 | - | | 0.4507 | 7600 | 0.0 | - | | 0.4536 | 7650 | 0.0003 | - | | 0.4566 | 7700 | 0.0 | - | | 0.4596 | 7750 | 0.0 | - | | 0.4625 | 7800 | 0.0 | - | | 0.4655 | 7850 | 0.0 | - | | 0.4685 | 7900 | 0.0 | - | | 0.4714 | 7950 | 0.0 | - | | 0.4744 | 8000 | 0.0 | - | | 0.4773 | 8050 | 0.0 | - | | 0.4803 | 8100 | 0.0 | - | | 0.4833 | 8150 | 0.0 | - | | 0.4862 | 8200 | 0.0 | - | | 0.4892 | 8250 | 0.0 | - | | 0.4922 | 8300 | 0.0 | - | | 0.4951 | 8350 | 0.0 | - | | 0.4981 | 8400 | 0.0 | - | | 0.5011 | 8450 | 0.0 | - | | 0.5040 | 8500 | 0.0 | - | | 0.5070 | 8550 | 0.0 | - | | 0.5100 | 8600 | 0.0 | - | | 0.5129 | 8650 | 0.0 | - | | 0.5159 | 8700 | 0.0 | - | | 0.5189 | 8750 | 0.0 | - | | 0.5218 | 8800 | 0.0 | - | | 0.5248 | 8850 | 0.0 | - | | 0.5278 | 8900 | 0.0 | - | | 0.5307 | 8950 | 0.0 | - | | 0.5337 | 9000 | 0.0 | - | | 0.5366 | 9050 | 0.0 | - | | 0.5396 | 9100 | 0.0 | - | | 0.5426 | 9150 | 0.0 | - | | 0.5455 | 9200 | 0.0 | - | | 0.5485 | 9250 | 0.0 | - | | 0.5515 | 9300 | 0.0 | - | | 0.5544 | 9350 | 0.0 | - | | 0.5574 | 9400 | 0.0 | - | | 0.5604 | 9450 | 0.0 | - | | 0.5633 | 9500 | 0.0 | - | | 0.5663 | 9550 | 0.0 | - | | 0.5693 | 9600 | 0.0 | - | | 0.5722 | 9650 | 0.0 | - | | 0.5752 | 9700 | 0.0 | - | | 0.5782 | 9750 | 0.0 | - | | 0.5811 | 9800 | 0.0 | - | | 0.5841 | 9850 | 0.0 | - | | 0.5870 | 9900 | 0.0 | - | | 0.5900 | 9950 | 0.0 | - | | 0.5930 | 10000 | 0.0 | - | | 0.5959 | 10050 | 0.0 | - | | 0.5989 | 10100 | 0.0 | - | | 0.6019 | 10150 | 0.0 | - | | 0.6048 | 10200 | 0.0 | - | | 0.6078 | 10250 | 0.0 | - | | 0.6108 | 10300 | 0.0 | - | | 0.6137 | 10350 | 0.0 | - | | 0.6167 | 10400 | 0.0 | - | | 0.6197 | 10450 | 0.0 | - | | 0.6226 | 10500 | 0.0 | - | | 0.6256 | 10550 | 0.0 | - | | 0.6286 | 10600 | 0.0 | - | | 0.6315 | 10650 | 0.0 | - | | 0.6345 | 10700 | 0.0 | - | | 0.6375 | 10750 | 0.0 | - | | 
0.6404 | 10800 | 0.0 | - | | 0.6434 | 10850 | 0.0 | - | | 0.6463 | 10900 | 0.0 | - | | 0.6493 | 10950 | 0.0 | - | | 0.6523 | 11000 | 0.0 | - | | 0.6552 | 11050 | 0.0 | - | | 0.6582 | 11100 | 0.0 | - | | 0.6612 | 11150 | 0.0 | - | | 0.6641 | 11200 | 0.0 | - | | 0.6671 | 11250 | 0.0 | - | | 0.6701 | 11300 | 0.0 | - | | 0.6730 | 11350 | 0.0 | - | | 0.6760 | 11400 | 0.0 | - | | 0.6790 | 11450 | 0.0 | - | | 0.6819 | 11500 | 0.0 | - | | 0.6849 | 11550 | 0.0 | - | | 0.6879 | 11600 | 0.0 | - | | 0.6908 | 11650 | 0.0 | - | | 0.6938 | 11700 | 0.0 | - | | 0.6968 | 11750 | 0.0 | - | | 0.6997 | 11800 | 0.0 | - | | 0.7027 | 11850 | 0.0 | - | | 0.7056 | 11900 | 0.0 | - | | 0.7086 | 11950 | 0.0 | - | | 0.7116 | 12000 | 0.0 | - | | 0.7145 | 12050 | 0.0 | - | | 0.7175 | 12100 | 0.0 | - | | 0.7205 | 12150 | 0.0 | - | | 0.7234 | 12200 | 0.0 | - | | 0.7264 | 12250 | 0.0 | - | | 0.7294 | 12300 | 0.0 | - | | 0.7323 | 12350 | 0.0 | - | | 0.7353 | 12400 | 0.0 | - | | 0.7383 | 12450 | 0.0 | - | | 0.7412 | 12500 | 0.0 | - | | 0.7442 | 12550 | 0.0 | - | | 0.7472 | 12600 | 0.0 | - | | 0.7501 | 12650 | 0.0 | - | | 0.7531 | 12700 | 0.0 | - | | 0.7560 | 12750 | 0.0 | - | | 0.7590 | 12800 | 0.0 | - | | 0.7620 | 12850 | 0.0 | - | | 0.7649 | 12900 | 0.0 | - | | 0.7679 | 12950 | 0.0 | - | | 0.7709 | 13000 | 0.0 | - | | 0.7738 | 13050 | 0.0 | - | | 0.7768 | 13100 | 0.0 | - | | 0.7798 | 13150 | 0.0 | - | | 0.7827 | 13200 | 0.0 | - | | 0.7857 | 13250 | 0.0 | - | | 0.7887 | 13300 | 0.0 | - | | 0.7916 | 13350 | 0.0 | - | | 0.7946 | 13400 | 0.0 | - | | 0.7976 | 13450 | 0.0 | - | | 0.8005 | 13500 | 0.0 | - | | 0.8035 | 13550 | 0.0 | - | | 0.8065 | 13600 | 0.0 | - | | 0.8094 | 13650 | 0.0 | - | | 0.8124 | 13700 | 0.0 | - | | 0.8153 | 13750 | 0.0 | - | | 0.8183 | 13800 | 0.0 | - | | 0.8213 | 13850 | 0.0 | - | | 0.8242 | 13900 | 0.0 | - | | 0.8272 | 13950 | 0.0 | - | | 0.8302 | 14000 | 0.0 | - | | 0.8331 | 14050 | 0.0 | - | | 0.8361 | 14100 | 0.0 | - | | 0.8391 | 14150 | 0.0 | - | | 0.8420 | 14200 | 0.0 | - | | 0.8450 | 14250 | 0.0 | - | | 0.8480 | 14300 | 0.0 | - | | 0.8509 | 14350 | 0.0 | - | | 0.8539 | 14400 | 0.0 | - | | 0.8569 | 14450 | 0.0 | - | | 0.8598 | 14500 | 0.0 | - | | 0.8628 | 14550 | 0.0 | - | | 0.8657 | 14600 | 0.0 | - | | 0.8687 | 14650 | 0.0 | - | | 0.8717 | 14700 | 0.0 | - | | 0.8746 | 14750 | 0.0 | - | | 0.8776 | 14800 | 0.0 | - | | 0.8806 | 14850 | 0.0 | - | | 0.8835 | 14900 | 0.0 | - | | 0.8865 | 14950 | 0.0 | - | | 0.8895 | 15000 | 0.0 | - | | 0.8924 | 15050 | 0.0 | - | | 0.8954 | 15100 | 0.0 | - | | 0.8984 | 15150 | 0.0 | - | | 0.9013 | 15200 | 0.0 | - | | 0.9043 | 15250 | 0.0 | - | | 0.9073 | 15300 | 0.0 | - | | 0.9102 | 15350 | 0.0 | - | | 0.9132 | 15400 | 0.0 | - | | 0.9162 | 15450 | 0.0 | - | | 0.9191 | 15500 | 0.0 | - | | 0.9221 | 15550 | 0.0 | - | | 0.9250 | 15600 | 0.0 | - | | 0.9280 | 15650 | 0.0 | - | | 0.9310 | 15700 | 0.0 | - | | 0.9339 | 15750 | 0.0 | - | | 0.9369 | 15800 | 0.0 | - | | 0.9399 | 15850 | 0.0 | - | | 0.9428 | 15900 | 0.0 | - | | 0.9458 | 15950 | 0.0 | - | | 0.9488 | 16000 | 0.0 | - | | 0.9517 | 16050 | 0.0 | - | | 0.9547 | 16100 | 0.0 | - | | 0.9577 | 16150 | 0.0 | - | | 0.9606 | 16200 | 0.0 | - | | 0.9636 | 16250 | 0.0 | - | | 0.9666 | 16300 | 0.0 | - | | 0.9695 | 16350 | 0.0 | - | | 0.9725 | 16400 | 0.0 | - | | 0.9755 | 16450 | 0.0 | - | | 0.9784 | 16500 | 0.0 | - | | 0.9814 | 16550 | 0.0 | - | | 0.9843 | 16600 | 0.0 | - | | 0.9873 | 16650 | 0.0 | - | | 0.9903 | 16700 | 0.0 | - | | 0.9932 | 16750 | 0.0 | - | | 0.9962 | 16800 | 0.0 | - | | 0.9992 | 16850 | 0.0 | - | ### Framework 
Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - spaCy: 3.7.4 - Transformers: 4.40.1 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
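For convenience, a minimal inference sketch in the style of other SetFit ABSA cards is shown below. It assumes the `setfit` ABSA extra and the spaCy model listed in the model description are installed; the aspect-model id `setfit-absa-aspect` is the placeholder name given in that description, and the example sentence is adapted from this card's widget examples.

```python
# Sketch only: run `pip install "setfit[absa]"` and `python -m spacy download en_core_web_sm` first.
from setfit import AbsaModel

model = AbsaModel.from_pretrained(
    "setfit-absa-aspect",                               # aspect span filter (placeholder id from the card)
    "marcelomoreno26/all-mpnet-base-v2-absa-polarity",  # this polarity classifier
    spacy_model="en_core_web_sm",
)

# Returns one prediction per extracted aspect span, e.g. {"span": "battery life", "polarity": "negative"}
preds = model.predict(["The battery life is probably an hour at best."])
print(preds)
```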
{"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/all-mpnet-base-v2", "widget": [{"text": "Needs Power and Mouse Cable to Plug in:Needs Power and Mouse Cable to Plug in back instead of side, In the way of operating a mouse in small area."}, {"text": "wireless router via built-in wireless took no time:Connecting to my wireless router via built-in wireless took no time at all."}, {"text": "The battery life is probably an:The battery life is probably an hour at best."}, {"text": "and with free shipping and no tax:The 13\" Macbook Pro just fits in my budget and with free shipping and no tax to CA this is the best price we can get for a great product."}, {"text": "product is top quality.:The price was very good, and the product is top quality."}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.7788235294117647, "name": "Accuracy"}]}]}]}
marcelomoreno26/all-mpnet-base-v2-absa-polarity
null
[ "setfit", "safetensors", "mpnet", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "model-index", "region:us" ]
null
2024-05-02T01:36:46+00:00
[ "2209.11055" ]
[]
TAGS #setfit #safetensors #mpnet #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/all-mpnet-base-v2 #model-index #region-us
SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2 ================================================================== This is a SetFit model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification. In particular, this model is in charge of classifying aspect polarities. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a Sentence Transformer with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model was trained within the context of a larger system for ABSA, which looks like so: 1. Use a spaCy model to select possible aspect span candidates. 2. Use a SetFit model to filter these possible aspect span candidates. 3. Use this SetFit model to classify the filtered aspect span candidates. Model Details ------------- ### Model Description * Model Type: SetFit * Sentence Transformer body: sentence-transformers/all-mpnet-base-v2 * Classification head: a LogisticRegression instance * spaCy Model: en\_core\_web\_sm * SetFitABSA Aspect Model: setfit-absa-aspect * SetFitABSA Polarity Model: marcelomoreno26/all-mpnet-base-v2-absa-polarity2 * Maximum Sequence Length: 384 tokens * Number of Classes: 4 classes ### Model Sources * Repository: SetFit on GitHub * Paper: Efficient Few-Shot Learning Without Prompts * Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts ### Model Labels Evaluation ---------- ### Metrics Uses ---- ### Direct Use for Inference First install the SetFit library: Then you can load this model and run inference. Training Details ---------------- ### Training Set Metrics ### Training Hyperparameters * batch\_size: (16, 2) * num\_epochs: (1, 16) * max\_steps: -1 * sampling\_strategy: oversampling * body\_learning\_rate: (2e-05, 1e-05) * head\_learning\_rate: 0.01 * loss: CosineSimilarityLoss * distance\_metric: cosine\_distance * margin: 0.25 * end\_to\_end: False * use\_amp: False * warmup\_proportion: 0.1 * seed: 42 * eval\_max\_steps: -1 * load\_best\_model\_at\_end: False ### Training Results ### Framework Versions * Python: 3.10.12 * SetFit: 1.0.3 * Sentence Transformers: 2.7.0 * spaCy: 3.7.4 * Transformers: 4.40.1 * PyTorch: 2.2.1+cu121 * Datasets: 2.19.0 * Tokenizers: 0.19.1 ### BibTeX
[ "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/all-mpnet-base-v2\n* Classification head: a LogisticRegression instance\n* spaCy Model: en\\_core\\_web\\_sm\n* SetFitABSA Aspect Model: setfit-absa-aspect\n* SetFitABSA Polarity Model: marcelomoreno26/all-mpnet-base-v2-absa-polarity2\n* Maximum Sequence Length: 384 tokens\n* Number of Classes: 4 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.4\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ "TAGS\n#setfit #safetensors #mpnet #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/all-mpnet-base-v2 #model-index #region-us \n", "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/all-mpnet-base-v2\n* Classification head: a LogisticRegression instance\n* spaCy Model: en\\_core\\_web\\_sm\n* SetFitABSA Aspect Model: setfit-absa-aspect\n* SetFitABSA Polarity Model: marcelomoreno26/all-mpnet-base-v2-absa-polarity2\n* Maximum Sequence Length: 384 tokens\n* Number of Classes: 4 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.4\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ 69, 121, 42, 16, 10, 43, 7, 169, 5, 84, 6 ]
[ "TAGS\n#setfit #safetensors #mpnet #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/all-mpnet-base-v2 #model-index #region-us \n### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/all-mpnet-base-v2\n* Classification head: a LogisticRegression instance\n* spaCy Model: en\\_core\\_web\\_sm\n* SetFitABSA Aspect Model: setfit-absa-aspect\n* SetFitABSA Polarity Model: marcelomoreno26/all-mpnet-base-v2-absa-polarity2\n* Maximum Sequence Length: 384 tokens\n* Number of Classes: 4 classes### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts### Model Labels\n\n\n\nEvaluation\n----------### Metrics\n\n\n\nUses\n----### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------### Training Set Metrics### Training Hyperparameters\n\n\n* batch\\_size: (16, 2)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False### Training Results### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.4\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1### BibTeX" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN1_SOAPL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
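For illustration, the hyperparameters listed above map onto a 🤗 `Seq2SeqTrainingArguments` configuration roughly as sketched below; the output directory and dataset wiring are placeholders, since the card does not document them.

```python
# Hedged sketch: mirrors the hyperparameters listed in the card; dataset and output paths are placeholders.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

base = "VietAI/vit5-large"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

args = Seq2SeqTrainingArguments(
    output_dir="CS505_COQE_viT5_train_InstructionN1_SOAPL_v1",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed precision
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer setting.

# trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=your_dataset, tokenizer=tokenizer)
# trainer.train()
```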
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_InstructionN1_SOAPL_v1", "results": []}]}
ThuyNT/CS505_COQE_viT5_train_InstructionN1_SOAPL_v1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T01:36:52+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_train_InstructionN1_SOAPL_v1 This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_train_InstructionN1_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_train_InstructionN1_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 62, 47, 7, 9, 9, 4, 102, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# CS505_COQE_viT5_train_InstructionN1_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
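Since the quick-start section above is still a placeholder, a heavily hedged loading sketch is provided here for orientation only: the repository id comes from this card, but the availability of a bundled tokenizer and the prompt format (modeled loosely on the widget example) are assumptions, not documentation from the author.

```python
# Illustrative sketch only; not the author's documented usage.
# Assumes the adapter repo ships tokenizer files; otherwise load the base model's tokenizer.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "shredder-31/Llamma-3_QG_V.1.0"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Prompt format is an assumption modeled on the widget example (instruction + "### Context: ...").
prompt = (
    "You will assist me in generating MCQ questions along with their Answers and Choices. "
    "Please use the next context to guide your generation of MCQ questions "
    "### Context: Vegetables and fruit are good for your health because of the vitamins, "
    "nutrients and fiber in them."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```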
{"library_name": "peft", "tags": ["unsloth", "trl", "sft"], "pipeline_tag": "text-generation", "widget": [{"text": "You will assist me in generating MCQ questions along with their Answers and Choices. Please use the next context to guide you generating of MCQ questions ### Context: : The advantages of Eating Fruits & Vegetables Vegetables and fruit are good for your health because of a lot of vitamins, nutrients and fiber in them. Fruits and vegetables are regarded as a healthy diet without adding chemicals or additives for carrying or storing foods. The United States Department of Agriculture suggests eating plenty of fruits and vegetables every day to get the advantages of eating a healthy diet. Disease Prevention The United States Department of Agriculture tells that eating fruits and vegetables helps fight strokes, cardiovascular disease and type 2 diabetes. Because of the potassium in fruits and vegetables, they can prevent the risk of developing kidney stones. The fiber in fruits and vegetables helps the body fight against coronary heart disease. Idea Health Since fruits and vegetables add to a balanced and healthy diet, eating these will help you with your idea. The vitamins in fruits and vegetables helps fight depress , according to the Mental Health Foundation. Vitamin B and folic acid found in some vegetables and bananas help you to relax. Eating fruits and vegetables can also help improve memory. Healthy Weight Eating fruits and vegetables helps you have a healthy weight. Fruits and vegetables naturally include less energy than other foods, according to the Center for Disease Control and Prevention. The fiber and water in the fruits and vegetables make you feel that you are full, so you will not feel that you have to eat more food.", "example_title": "Q1"}]}
shredder-31/Llamma-3_QG_V.1.0
null
[ "peft", "pytorch", "safetensors", "llama", "unsloth", "trl", "sft", "text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T01:37:36+00:00
[ "1910.09700" ]
[]
TAGS #peft #pytorch #safetensors #llama #unsloth #trl #sft #text-generation #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#peft #pytorch #safetensors #llama #unsloth #trl #sft #text-generation #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 49, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#peft #pytorch #safetensors #llama #unsloth #trl #sft #text-generation #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
null
# Quant Infos - Includes recent bpe pre-tokenizer fixes https://github.com/ggerganov/llama.cpp/pull/6920 - quants done with an importance matrix for improved quantization loss - K & IQ quants in basically all variants from Q6_K down to IQ1_S - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [8d608a81b7bd170f700648f8214e6f3279d4d715](https://github.com/ggerganov/llama.cpp/commit/8d608a81b7bd170f700648f8214e6f3279d4d715) (master from 2024-05-01) - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset. ``` ./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat ``` # Original Model Card: # Hermes 2 Pro - Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Example Outputs ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ll2j2wkQffCsiSwUjfRUq.png) ### Write the plot for a story where anime became real life: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/h_7aXGXdm2p2ONYuDF4Ii.png) ### Coding Assistance ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bBd0hyAb8w5rKUiN2w1I6.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. 
Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. 
Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. 
It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vOYv9wJUMn1Xrf4BvmO_x.png) ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5520|± |0.0145| | | |acc_norm|0.5887|± |0.0144| |arc_easy | 0|acc |0.8350|± |0.0076| | | |acc_norm|0.8123|± |0.0080| |boolq | 1|acc |0.8584|± |0.0061| |hellaswag | 0|acc |0.6265|± |0.0048| | | |acc_norm|0.8053|± |0.0040| |openbookqa | 0|acc |0.3800|± |0.0217| | | |acc_norm|0.4580|± |0.0223| |piqa | 0|acc |0.8003|± |0.0093| | | |acc_norm|0.8118|± |0.0091| |winogrande | 0|acc |0.7490|± |0.0122| ``` Average: 72.62 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2520|± |0.0273| | | |acc_norm|0.2559|± |0.0274| |agieval_logiqa_en | 0|acc |0.3548|± |0.0188| | | |acc_norm|0.3625|± |0.0189| |agieval_lsat_ar | 0|acc |0.1826|± |0.0255| | | |acc_norm|0.1913|± |0.0260| |agieval_lsat_lr | 0|acc |0.5510|± |0.0220| | | |acc_norm|0.5255|± |0.0221| |agieval_lsat_rc | 0|acc |0.6431|± |0.0293| | | |acc_norm|0.6097|± |0.0298| |agieval_sat_en | 0|acc |0.7330|± |0.0309| | | |acc_norm|0.7039|± |0.0319| |agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343| | | |acc_norm|0.3689|± |0.0337| |agieval_sat_math | 0|acc |0.3909|± |0.0330| | | |acc_norm|0.3773|± |0.0328| ``` Average: 42.44 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360| |bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± 
|0.0222| |bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102| |bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126| |bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327| |bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288| ``` Average: 43.55 ## TruthfulQA: ``` | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.410|± |0.0172| | | |mc2 |0.578|± |0.0157| ``` # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "Hermes-2-Pro-Llama-3-8B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. 
In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF # How to cite: ```bibtex @misc{Hermes-2-Pro-Llama-3-8B, url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B}, title={Hermes-2-Pro-Llama-3-8B}, author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"} } ```
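As a usage sketch not found in the original card: one way to run one of these imatrix GGUF quants directly with a llama.cpp build from around the referenced commit, using the ChatML format described above. The quant filename and sampling settings are placeholders.

```bash
# Sketch only: assumes a llama.cpp `main` binary from around the 2024-05-01 commit noted above
# and a quant file downloaded from this repo; the filename below is a placeholder.
./main \
  -m Hermes-2-Pro-Llama-3-8B.Q4_K_M.gguf \
  -c 4096 -n 256 --temp 0.8 -e \
  -p "<|im_start|>system\nYou are Hermes 2, a helpful assistant.<|im_end|>\n<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n"
```

The `-e` flag makes llama.cpp expand the `\n` escapes, so each ChatML turn lands on its own line as the prompt format above expects.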
{"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "gguf", "imatrix", "importance matrix"], "base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B", "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]}
qwp4w3hyb/Hermes-2-Pro-Llama-3-8B-iMat-GGUF
null
[ "gguf", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "imatrix", "importance matrix", "en", "base_model:NousResearch/Hermes-2-Pro-Llama-3-8B", "license:apache-2.0", "region:us" ]
null
2024-05-02T01:38:20+00:00
[]
[ "en" ]
TAGS #gguf #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #imatrix #importance matrix #en #base_model-NousResearch/Hermes-2-Pro-Llama-3-8B #license-apache-2.0 #region-us
# Quant Infos

- Includes recent bpe pre-tokenizer fixes URL
- quants done with an importance matrix for reduced quantization loss
- K & IQ quants in basically all variants from Q6_K down to IQ1_S
- Quantized with URL commit 8d608a81b7bd170f700648f8214e6f3279d4d715 (master from 2024-05-01)
- Imatrix generated with this dataset.

# Original Model Card:

# Hermes 2 Pro - Llama-3 8B

!image/png

## Model Description

Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.

This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.

Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.

This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.

This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI

Learn more about the function calling system for this model on our github repo here: URL

## Example Outputs

### Ask for a structured JSON output:
!image/png

### Write the plot for a story where anime became real life:
!image/png

### Coding Assistance
!image/png

# Prompt Format

Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.

System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.

This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.

This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.

Prompt with system instruction (Use whatever system prompt you like, this is just an example!):

This prompt is available as a chat template, which means you can format messages using the 'tokenizer.apply_chat_template()' method:

When tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\n' to your prompt, to ensure that the model continues with an assistant response.

To utilize the prompt format without a system prompt, simply leave the line out.

## Prompt Format for Function Calling

Our model was trained on specific system prompts and structures for Function Calling.

You should use the system role with this message, followed by a function signature json as this example shows here.

To complete the function call, create a user prompt that follows the above system prompt, like so:

The model will then generate a tool call, which your inference code must parse and plug into a function (see example inference code here: URL).

Once you parse the tool call, call the API, get the returned values for the call, and pass them back in as a new role, 'tool', like so:

The assistant will then read in that data from the function's response, and generate a natural language response:

## Prompt Format for JSON Mode / Structured Outputs

Our model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.

Your schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL

Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.

# Benchmarks

!image/png

## GPT4All:
Average: 72.62

## AGIEval:
Average: 42.44

## BigBench:
Average: 43.55

## TruthfulQA:

# Inference Code

Here is example code using HuggingFace Transformers to run inference with the model (note: in 4bit, it will require around 5GB of VRAM).

Note: To use function calling, you should see the github repo above.

## Inference Code for Function Calling:

All code for utilizing, parsing, and building function calling templates is available on our github: URL

!image/png

# Chat Interfaces

When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that, use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

!image/png

## Quantized Versions:

GGUF Versions Available Here: URL

# How to cite:
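The transformers inference snippet referenced in the "Inference Code" section was not preserved in this dump. As a rough sketch only (not the card's original code), loading the base model NousResearch/Hermes-2-Pro-Llama-3-8B listed in this repo's tags in 4-bit and prompting it through the ChatML chat template might look like this; the system and user messages here are illustrative placeholders:

```python
# Hedged sketch, not the card's original snippet: 4-bit load + ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "NousResearch/Hermes-2-Pro-Llama-3-8B"  # base model from this repo's tags

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

messages = [
    {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
    {"role": "user", "content": "Explain what an importance matrix is in one sentence."},
]

# add_generation_prompt=True appends '<|im_start|>assistant\n', as described above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For the GGUF files in this repo itself, a llama.cpp-based runtime is the intended path; the sketch above only illustrates the chat-template mechanics described in the Prompt Format section.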
[ "# Quant Infos\n\n- Includes recent bpe pre-tokenizer fixes URL\n- quants done with an importance matrix for improved quantization loss\n- K & IQ quants in basically all variants from Q6_K down to IQ1_S\n- Quantized with URL commit 8d608a81b7bd170f700648f8214e6f3279d4d715 (master from 2024-05-01)\n- Imatrix generated with this dataset.", "# Original Model Card:", "# Hermes 2 Pro - Llama-3 8B\n\n!image/png", "## Model Description\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL", "## Example Outputs", "### Ask for a structured JSON output:\n!image/png", "### Write the plot for a story where anime became real life:\n!image/png", "### Coding Assistance\n!image/png", "# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.", "## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. 
\n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:", "## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.", "# Benchmarks\n\n!image/png", "## GPT4All:\n\nAverage: 72.62", "## AGIEval:\n\nAverage: 42.44", "## BigBench:\n\nAverage: 43.55", "## TruthfulQA:", "# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.", "## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png", "# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png", "## Quantized Versions:\n\nGGUF Versions Available Here: URL", "# How to cite:" ]
[ "TAGS\n#gguf #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #imatrix #importance matrix #en #base_model-NousResearch/Hermes-2-Pro-Llama-3-8B #license-apache-2.0 #region-us \n", "# Quant Infos\n\n- Includes recent bpe pre-tokenizer fixes URL\n- quants done with an importance matrix for improved quantization loss\n- K & IQ quants in basically all variants from Q6_K down to IQ1_S\n- Quantized with URL commit 8d608a81b7bd170f700648f8214e6f3279d4d715 (master from 2024-05-01)\n- Imatrix generated with this dataset.", "# Original Model Card:", "# Hermes 2 Pro - Llama-3 8B\n\n!image/png", "## Model Description\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL", "## Example Outputs", "### Ask for a structured JSON output:\n!image/png", "### Write the plot for a story where anime became real life:\n!image/png", "### Coding Assistance\n!image/png", "# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.", "## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. 
\n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:", "## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.", "# Benchmarks\n\n!image/png", "## GPT4All:\n\nAverage: 72.62", "## AGIEval:\n\nAverage: 42.44", "## BigBench:\n\nAverage: 43.55", "## TruthfulQA:", "# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.", "## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png", "# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png", "## Quantized Versions:\n\nGGUF Versions Available Here: URL", "# How to cite:" ]
[ 91, 113, 5, 16, 262, 4, 16, 20, 10, 264, 154, 121, 8, 12, 11, 11, 6, 50, 37, 109, 15, 5 ]
[ "TAGS\n#gguf #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #imatrix #importance matrix #en #base_model-NousResearch/Hermes-2-Pro-Llama-3-8B #license-apache-2.0 #region-us \n# Quant Infos\n\n- Includes recent bpe pre-tokenizer fixes URL\n- quants done with an importance matrix for improved quantization loss\n- K & IQ quants in basically all variants from Q6_K down to IQ1_S\n- Quantized with URL commit 8d608a81b7bd170f700648f8214e6f3279d4d715 (master from 2024-05-01)\n- Imatrix generated with this dataset.# Original Model Card:# Hermes 2 Pro - Llama-3 8B\n\n!image/png## Model Description\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - '<tools>', '<tool_call>', '<tool_response>' and their closing tags are single tokens now.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL## Example Outputs### Ask for a structured JSON output:\n!image/png### Write the plot for a story where anime became real life:\n!image/png### Coding Assistance\n!image/png# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. 
\n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.# Benchmarks\n\n!image/png## GPT4All:\n\nAverage: 72.62## AGIEval:\n\nAverage: 42.44## BigBench:\n\nAverage: 43.55## TruthfulQA:# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png## Quantized Versions:\n\nGGUF Versions Available Here: URL# How to cite:" ]
null
transformers
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
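As one possible concrete path (an assumption, not part of the original card, apart from the repo and file names taken from the table above), a quant can be fetched with huggingface_hub and run with llama-cpp-python:

```python
# Hedged sketch (not from the original card): fetch one quant from the table
# above and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/ChimeraLlama-3-8B-v3-GGUF",
    filename="ChimeraLlama-3-8B-v3.Q4_K_M.gguf",  # "fast, recommended" row in the table
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```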
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit"], "base_model": "mlabonne/ChimeraLlama-3-8B-v3", "quantized_by": "mradermacher"}
mradermacher/ChimeraLlama-3-8B-v3-GGUF
null
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:mlabonne/ChimeraLlama-3-8B-v3", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-02T01:42:16+00:00
[]
[ "en" ]
TAGS #transformers #gguf #merge #mergekit #lazymergekit #en #base_model-mlabonne/ChimeraLlama-3-8B-v3 #license-other #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #en #base_model-mlabonne/ChimeraLlama-3-8B-v3 #license-other #endpoints_compatible #region-us \n" ]
[ 53 ]
[ "TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #en #base_model-mlabonne/ChimeraLlama-3-8B-v3 #license-other #endpoints_compatible #region-us \n" ]
text-generation
transformers
![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png)

# flammen23-mistral-7B

A Mistral 7B LLM built from merging pretrained models and finetuning on [nbeerbower/bible-dpo](https://huggingface.co/datasets/nbeerbower/bible-dpo). Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.

### Method

Finetuned using an A100 on Google Colab.

[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)

### Configuration

System prompt, dataset formatting:

```python
# Imports needed by the snippets below (datasets, transformers, peft, trl, torch).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# model_name / new_model are defined in the surrounding notebook; per this card's
# metadata the base model is flammenai/flammen22X-mistral-7B.
model_name = "flammenai/flammen22X-mistral-7B"
new_model = "flammen23-mistral-7B"

def chatml_format(example):
    # Format system
    system = ""
    systemMessage = "Recite the given verse from the Bible."
    system = "<|im_start|>system\n" + systemMessage + "<|im_end|>\n"

    # Format instruction
    prompt = "<|im_start|>user\nRecite " + example['citation'] + "<|im_end|>\n<|im_start|>assistant\n"

    # Format chosen answer
    chosen = example['text'] + "<|im_end|>\n"

    # Format rejected answer
    rejected = example['rejected'] + "<|im_end|>\n"

    return {
        "prompt": system + prompt,
        "chosen": chosen,
        "rejected": rejected,
    }

dataset = load_dataset("nbeerbower/bible-dpo")['train']

# Save columns
original_columns = dataset.column_names

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

# Format dataset
dataset = dataset.map(
    chatml_format,
    remove_columns=original_columns
)
```

LoRA, model, and training settings:

```python
# LoRA configuration
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)

# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True
)
model.config.use_cache = False

# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True
)

# Training arguments
training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=5000,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=512,
    max_length=1536,
    force_use_ref_model=True
)

# Fine-tune model with DPO
dpo_trainer.train()
```
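The card stops at `dpo_trainer.train()`. A typical follow-up step, which is an assumption and not shown in the card, is to persist the trained adapter and tokenizer:

```python
# Hedged follow-up, not from the original card: save the DPO-trained adapter.
dpo_trainer.model.save_pretrained("final_checkpoint")
tokenizer.save_pretrained("final_checkpoint")
```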
{"license": "apache-2.0", "library_name": "transformers", "datasets": ["nbeerbower/bible-dpo"], "base_model": ["flammenai/flammen22X-mistral-7B"]}
flammenai/flammen23-mistral-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "dataset:nbeerbower/bible-dpo", "base_model:flammenai/flammen22X-mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T01:42:29+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #dataset-nbeerbower/bible-dpo #base_model-flammenai/flammen22X-mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!image/png # flammen23-mistral-7B A Mistral 7B LLM built from merging pretrained models and finetuning on nbeerbower/bible-dpo. Flammen specializes in exceptional character roleplay, creative writing, and general intelligence ### Method Finetuned using an A100 on Google Colab. Fine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne ### Configuration System prompt, dataset formatting: LoRA, model, and training settings:
[ "# flammen23-mistral-7B\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on nbeerbower/bible-dpo. \nFlammen specializes in exceptional character roleplay, creative writing, and general intelligence", "### Method\n\nFinetuned using an A100 on Google Colab.\n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne", "### Configuration\n\nSystem prompt, dataset formatting:\n\n\n\nLoRA, model, and training settings:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #dataset-nbeerbower/bible-dpo #base_model-flammenai/flammen22X-mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# flammen23-mistral-7B\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on nbeerbower/bible-dpo. \nFlammen specializes in exceptional character roleplay, creative writing, and general intelligence", "### Method\n\nFinetuned using an A100 on Google Colab.\n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne", "### Configuration\n\nSystem prompt, dataset formatting:\n\n\n\nLoRA, model, and training settings:" ]
[ 76, 56, 35, 21 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #dataset-nbeerbower/bible-dpo #base_model-flammenai/flammen22X-mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# flammen23-mistral-7B\n\nA Mistral 7B LLM built from merging pretrained models and finetuning on nbeerbower/bible-dpo. \nFlammen specializes in exceptional character roleplay, creative writing, and general intelligence### Method\n\nFinetuned using an A100 on Google Colab.\n\nFine-tune a Mistral-7b model with Direct Preference Optimization - Maxime Labonne### Configuration\n\nSystem prompt, dataset formatting:\n\n\n\nLoRA, model, and training settings:" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_InstructionN0_SOAPL_v1 This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
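For illustration only (this is an assumption, not the repository's actual training script), the hyperparameters listed above correspond roughly to the following Transformers configuration:

```python
# Hedged sketch, not the actual training script: the listed hyperparameters
# expressed as Transformers Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="CS505_COQE_viT5_train_InstructionN0_SOAPL_v1",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
# The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) match the
# Trainer's default optimizer configuration.
```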
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_InstructionN0_SOAPL_v1", "results": []}]}
ThuyNT/CS505_COQE_viT5_train_InstructionN0_SOAPL_v1
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T01:45:13+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_train_InstructionN0_SOAPL_v1 This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_train_InstructionN0_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_train_InstructionN0_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 62, 47, 7, 9, 9, 4, 102, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# CS505_COQE_viT5_train_InstructionN0_SOAPL_v1\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
unconditional-image-generation
diffusers
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Vincent-Liiiii/sd-class-butterflies-32') image = pipeline().images[0] image ```
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
Vincent-Liiiii/sd-class-butterflies-32
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-02T01:46:35+00:00
[]
[]
TAGS #diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
# Model Card for Unit 1 of the Diffusion Models Class This model is a diffusion model for unconditional image generation of cute . ## Usage
[ "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n", "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ 43, 26, 3 ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
rxm210132/gemma_QLORA_model_emotion_detection
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T01:54:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Mervyn999/mistral-7b-distilabel-mini
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T01:54:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flant5-base This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3670671.36 | 0.32 | 250 | nan | | 0.0 | 0.64 | 500 | nan | | 0.0 | 0.96 | 750 | nan | | 0.0 | 1.28 | 1000 | nan | | 0.0 | 1.61 | 1250 | nan | | 0.0 | 1.93 | 1500 | nan | | 0.0 | 2.25 | 1750 | nan | | 0.0 | 2.57 | 2000 | nan | | 0.0 | 2.89 | 2250 | nan | | 0.0 | 3.21 | 2500 | nan | | 0.0 | 3.53 | 2750 | nan | | 0.0 | 3.85 | 3000 | nan | | 0.0 | 4.17 | 3250 | nan | | 0.0 | 4.49 | 3500 | nan | | 0.0 | 4.82 | 3750 | nan | | 0.0 | 5.14 | 4000 | nan | | 0.0 | 5.46 | 4250 | nan | | 0.0 | 5.78 | 4500 | nan | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.15.2
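A usage snippet is not included in the flant5-base card above; a minimal, hedged sketch of loading this fine-tuned flan-t5-base checkpoint with transformers might look like the following. The repo id `tuquyennnn/flant5-base` is taken from this record, and the reported `nan` evaluation loss suggests the saved weights may not be usable as-is.

```python
# Minimal sketch: load the fine-tuned checkpoint the same way as the base
# google/flan-t5-base model. Repo id taken from this record; not verified to work.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tuquyennnn/flant5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Summarize: the training run collapsed to a nan loss.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```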
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flant5-base", "results": []}]}
tuquyennnn/flant5-base
null
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T01:55:55+00:00
[]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
flant5-base =========== This model is a fine-tuned version of google/flan-t5-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: nan Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 6 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ 64, 124, 5, 40 ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
unconditional-image-generation
diffusers
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('reevewu/sd-class-butterflies-32') image = pipeline().images[0] image ```
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
reevewu/sd-class-butterflies-32
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-02T02:00:21+00:00
[]
[]
TAGS #diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
# Model Card for Unit 1 of the Diffusion Models Class This model is a diffusion model for unconditional image generation of cute . ## Usage
[ "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n", "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ 43, 26, 3 ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .## Usage" ]
reinforcement-learning
null
# PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
{"tags": ["LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-94.60 +/- 18.22", "name": "mean_reward", "verified": false}]}]}]}
Joalbom14/ppo-LunarLander-v2-CleanRL
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
null
2024-05-02T02:02:02+00:00
[]
[]
TAGS #tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us
# PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
[ "# PPO Agent Playing LunarLander-v2\r\n\r\n This is a trained model of a PPO agent playing LunarLander-v2.\r\n\r\n # Hyperparameters" ]
[ "TAGS\n#tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us \n", "# PPO Agent Playing LunarLander-v2\r\n\r\n This is a trained model of a PPO agent playing LunarLander-v2.\r\n\r\n # Hyperparameters" ]
[ 42, 32 ]
[ "TAGS\n#tensorboard #LunarLander-v2 #ppo #deep-reinforcement-learning #reinforcement-learning #custom-implementation #deep-rl-course #model-index #region-us \n# PPO Agent Playing LunarLander-v2\r\n\r\n This is a trained model of a PPO agent playing LunarLander-v2.\r\n\r\n # Hyperparameters" ]
unconditional-image-generation
diffusers
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('reevewu/sd-class-butterflies-64') image = pipeline().images[0] image ```
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
reevewu/sd-class-butterflies-64
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-02T02:02:29+00:00
[]
[]
TAGS #diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
# Model Card for Unit 1 of the Diffusion Models Class This model is a diffusion model for unconditional image generation of cute . ## Usage
[ "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n", "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ 43, 26, 3 ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .## Usage" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_falcon_sharded_80_20_HPE This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
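The card above lists the PEFT training setup but no loading example; a minimal sketch, assuming the repository holds a PEFT (LoRA-style) adapter for the stated base model rather than full weights, could be:

```python
# Sketch only (assumption: the repo contains a PEFT adapter for the base model).
# Base and adapter repo ids are taken from this record.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "ybelkada/falcon-7b-sharded-bf16"
adapter_id = "sathwik77/trained_falcon_sharded_80_20_HPE"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Question: what was this adapter trained on?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```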
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "ybelkada/falcon-7b-sharded-bf16", "model-index": [{"name": "trained_falcon_sharded_80_20_HPE", "results": []}]}
sathwik77/trained_falcon_sharded_80_20_HPE
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-05-02T02:03:45+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us
# trained_falcon_sharded_80_20_HPE This model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# trained_falcon_sharded_80_20_HPE\n\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 200\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us \n", "# trained_falcon_sharded_80_20_HPE\n\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 200\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ 48, 46, 7, 9, 9, 4, 133, 5, 55 ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us \n# trained_falcon_sharded_80_20_HPE\n\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 200\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
unconditional-image-generation
diffusers
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('VickFan/sd-class-butterflies-32') image = pipeline().images[0] image ```
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
VickFan/sd-class-butterflies-32
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-02T02:05:46+00:00
[]
[]
TAGS #diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
# Model Card for Unit 1 of the Diffusion Models Class This model is a diffusion model for unconditional image generation of cute . ## Usage
[ "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n", "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ 43, 26, 3 ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
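The "How to Get Started" section of this card is left unfilled; as a generic, hedged sketch (not the author's own instructions), a mistral text-generation checkpoint on the Hub is typically loaded like this, with the repo id taken from this record:

```python
# Generic sketch only; the card itself provides no usage details.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mervyn999/mistral-7b-distilabel-mini-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Explain what supervised fine-tuning is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```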
{"library_name": "transformers", "tags": []}
Mervyn999/mistral-7b-distilabel-mini-SFT
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:07:38+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 44, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/chargoddard/llama-2-16b-nastychat <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-2-16b-nastychat-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q2_K.gguf) | Q2_K | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.IQ3_XS.gguf) | IQ3_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.IQ3_S.gguf) | IQ3_S | 7.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q3_K_S.gguf) | Q3_K_S | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.IQ3_M.gguf) | IQ3_M | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q3_K_M.gguf) | Q3_K_M | 8.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q3_K_L.gguf) | Q3_K_L | 8.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.IQ4_XS.gguf) | IQ4_XS | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q4_K_S.gguf) | Q4_K_S | 9.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q4_K_M.gguf) | Q4_K_M | 9.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q5_K_S.gguf) | Q5_K_S | 11.3 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q5_K_M.gguf) | Q5_K_M | 11.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q6_K.gguf) | Q6_K | 13.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q8_0.gguf) | Q8_0 | 17.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
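The card above defers GGUF usage to external READMEs; as one concrete, hedged example, the recommended Q4_K_M file listed in the table can be run with a recent llama.cpp build roughly like this, following the same CLI pattern used by the other GGUF cards in this dump:

```bash
# Sketch: run the "fast, recommended" Q4_K_M quant with llama.cpp
# (file name taken from the table above).
llama-cli \
  --hf-repo mradermacher/llama-2-16b-nastychat-GGUF \
  --model llama-2-16b-nastychat.Q4_K_M.gguf \
  -p "Write a short greeting." -n 128
```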
{"language": ["en"], "library_name": "transformers", "tags": ["not-for-all-audiences"], "base_model": "chargoddard/llama-2-16b-nastychat", "quantized_by": "mradermacher"}
mradermacher/llama-2-16b-nastychat-GGUF
null
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:chargoddard/llama-2-16b-nastychat", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:08:10+00:00
[]
[ "en" ]
TAGS #transformers #gguf #not-for-all-audiences #en #base_model-chargoddard/llama-2-16b-nastychat #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #not-for-all-audiences #en #base_model-chargoddard/llama-2-16b-nastychat #endpoints_compatible #region-us \n" ]
[ 47 ]
[ "TAGS\n#transformers #gguf #not-for-all-audiences #en #base_model-chargoddard/llama-2-16b-nastychat #endpoints_compatible #region-us \n" ]
null
null
# joeshmoethefunnyone/pythia-410m-Q8_0-GGUF This model was converted to GGUF format from [`EleutherAI/pythia-410m`](https://huggingface.co/EleutherAI/pythia-410m) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-410m) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo joeshmoethefunnyone/pythia-410m-Q8_0-GGUF --model pythia-410m.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo joeshmoethefunnyone/pythia-410m-Q8_0-GGUF --model pythia-410m.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-410m.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo"], "datasets": ["EleutherAI/pile"]}
joeshmoethefunnyone/pythia-410m-Q8_0-GGUF
null
[ "gguf", "pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo", "en", "dataset:EleutherAI/pile", "license:apache-2.0", "region:us" ]
null
2024-05-02T02:08:35+00:00
[]
[ "en" ]
TAGS #gguf #pytorch #causal-lm #pythia #llama-cpp #gguf-my-repo #en #dataset-EleutherAI/pile #license-apache-2.0 #region-us
# joeshmoethefunnyone/pythia-410m-Q8_0-GGUF This model was converted to GGUF format from 'EleutherAI/pythia-410m' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# joeshmoethefunnyone/pythia-410m-Q8_0-GGUF\nThis model was converted to GGUF format from 'EleutherAI/pythia-410m' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #pytorch #causal-lm #pythia #llama-cpp #gguf-my-repo #en #dataset-EleutherAI/pile #license-apache-2.0 #region-us \n", "# joeshmoethefunnyone/pythia-410m-Q8_0-GGUF\nThis model was converted to GGUF format from 'EleutherAI/pythia-410m' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ 58, 80, 52 ]
[ "TAGS\n#gguf #pytorch #causal-lm #pythia #llama-cpp #gguf-my-repo #en #dataset-EleutherAI/pile #license-apache-2.0 #region-us \n# joeshmoethefunnyone/pythia-410m-Q8_0-GGUF\nThis model was converted to GGUF format from 'EleutherAI/pythia-410m' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
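The card above gives no code; a minimal, hedged sketch of the kind of policy rollout such a Reinforce agent performs on CartPole-v1 (hypothetical network, not the repository's trained weights) could be:

```python
# Hypothetical sketch of a REINFORCE-style policy acting in CartPole-v1;
# the actual trained network from this repo is not reproduced here.
import gymnasium as gym
import torch
import torch.nn as nn

# CartPole-v1: 4-dimensional observation, 2 discrete actions.
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2), nn.Softmax(dim=-1))

env = gym.make("CartPole-v1")
obs, _ = env.reset(seed=0)
total_reward, done = 0.0, False
while not done:
    probs = policy(torch.as_tensor(obs, dtype=torch.float32))
    action = torch.distributions.Categorical(probs).sample().item()
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```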
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-CartPoleV1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "133.80 +/- 10.01", "name": "mean_reward", "verified": false}]}]}]}
Joalbom14/Reinforce-CartPoleV1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-05-02T02:08:39+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ 32, 46 ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
gpt-neox
# joeshmoethefunnyone/pythia-70m-Q8_0-GGUF This model was converted to GGUF format from [`EleutherAI/pythia-70m`](https://huggingface.co/EleutherAI/pythia-70m) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-70m) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo joeshmoethefunnyone/pythia-70m-Q8_0-GGUF --model pythia-70m.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo joeshmoethefunnyone/pythia-70m-Q8_0-GGUF --model pythia-70m.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-70m.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "library_name": "gpt-neox", "tags": ["pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo"], "datasets": ["EleutherAI/pile"]}
joeshmoethefunnyone/pythia-70m-Q8_0-GGUF
null
[ "gpt-neox", "gguf", "pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo", "en", "dataset:EleutherAI/pile", "license:apache-2.0", "region:us" ]
null
2024-05-02T02:09:06+00:00
[]
[ "en" ]
TAGS #gpt-neox #gguf #pytorch #causal-lm #pythia #llama-cpp #gguf-my-repo #en #dataset-EleutherAI/pile #license-apache-2.0 #region-us
# joeshmoethefunnyone/pythia-70m-Q8_0-GGUF This model was converted to GGUF format from 'EleutherAI/pythia-70m' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# joeshmoethefunnyone/pythia-70m-Q8_0-GGUF\nThis model was converted to GGUF format from 'EleutherAI/pythia-70m' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gpt-neox #gguf #pytorch #causal-lm #pythia #llama-cpp #gguf-my-repo #en #dataset-EleutherAI/pile #license-apache-2.0 #region-us \n", "# joeshmoethefunnyone/pythia-70m-Q8_0-GGUF\nThis model was converted to GGUF format from 'EleutherAI/pythia-70m' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ 64, 80, 52 ]
[ "TAGS\n#gpt-neox #gguf #pytorch #causal-lm #pythia #llama-cpp #gguf-my-repo #en #dataset-EleutherAI/pile #license-apache-2.0 #region-us \n# joeshmoethefunnyone/pythia-70m-Q8_0-GGUF\nThis model was converted to GGUF format from 'EleutherAI/pythia-70m' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
disroom/test_gemma-1.1-7b-it-0.01M
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:11:26+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** jurieyel - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
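The card above does not include a local inference snippet for the exported GGUF file. Below is a minimal, non-official sketch using the llama-cpp-python bindings; the local filename passed to `model_path` is an assumption (download the actual `q4_K_M` .gguf file from this repository first), and the generation settings are illustrative only.

```python
# Minimal sketch (not an official example): run the q4_K_M GGUF export locally
# with llama-cpp-python. The filename below is assumed; point model_path at the
# .gguf file downloaded from jurieyel/77hr-tuned-Llama3-8b-4bit-GGUF-q4_K_M.
from llama_cpp import Llama

llm = Llama(
    model_path="77hr-tuned-llama3-8b.Q4_K_M.gguf",  # assumed local filename
    n_ctx=4096,  # context window; reduce if memory is limited
)

output = llm(
    "Explain in one sentence why 4-bit quantization reduces memory use.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

The same file can also be run with the llama.cpp command-line tools; the Python route is shown here only to keep the examples in one language.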
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
jurieyel/77hr-tuned-Llama3-8b-4bit-GGUF-q4_K_M
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:14:12+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: jurieyel - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 61, 81 ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: JackFram/llama-68m model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /data/data/final_set_cleaned/train/ type: sharegpt conversation: chatml - path: /data/data/map_coig_cqia.jsonl type: sharegpt conversation: chatml - path: /data/data/ruozhiba.jsonl type: sharegpt conversation: chatml dataset_prepared_path: last_run_prepared val_set_size: 0 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 4 num_epochs: 2 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 0 eval_table_size: saves_per_epoch: 4 debug: deepspeed: deepspeed/zero2.json weight_decay: 0.0 fsdp: fsdp_config: default_system_message: "You are a helpful assistant." special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" ``` </details><br> # data/llama-68m-20240502-0037 This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 6 - gradient_accumulation_steps: 8 - total_train_batch_size: 192 - total_eval_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.0.1+cu118 - Datasets 2.15.0 - Tokenizers 0.19.1
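The card lists the full axolotl configuration and training hyperparameters but no inference example. A minimal, non-authoritative sketch for loading the published checkpoint with transformers is shown below; the ChatML-style prompt mirrors the `special_tokens` and `default_system_message` entries in the config above, and the generation settings are assumptions.

```python
# Minimal sketch: load the fine-tuned 68M checkpoint and prompt it with the
# ChatML-style template implied by the axolotl config (illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lu-vae/llama-68m-fft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```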
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "JackFram/llama-68m", "model-index": [{"name": "data/llama-68m-20240502-0037", "results": []}]}
lu-vae/llama-68m-fft
null
[ "transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "base_model:JackFram/llama-68m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:16:17+00:00
[]
[]
TAGS #transformers #pytorch #llama #text-generation #generated_from_trainer #base_model-JackFram/llama-68m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL alt="Built with Axolotl" width="200" height="32"/> <details><summary>See axolotl config</summary> axolotl version: '0.4.0' </details><br> # data/llama-68m-20240502-0037 This model is a fine-tuned version of JackFram/llama-68m on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 6 - gradient_accumulation_steps: 8 - total_train_batch_size: 192 - total_eval_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.0.1+cu118 - Datasets 2.15.0 - Tokenizers 0.19.1
[ "# data/llama-68m-20240502-0037\n\nThis model is a fine-tuned version of JackFram/llama-68m on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 6\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 192\n- total_eval_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.0.1+cu118\n- Datasets 2.15.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #pytorch #llama #text-generation #generated_from_trainer #base_model-JackFram/llama-68m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# data/llama-68m-20240502-0037\n\nThis model is a fine-tuned version of JackFram/llama-68m on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 6\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 192\n- total_eval_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.0.1+cu118\n- Datasets 2.15.0\n- Tokenizers 0.19.1" ]
[ 63, 40, 7, 9, 9, 4, 153, 5, 44 ]
[ "TAGS\n#transformers #pytorch #llama #text-generation #generated_from_trainer #base_model-JackFram/llama-68m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# data/llama-68m-20240502-0037\n\nThis model is a fine-tuned version of JackFram/llama-68m on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 6\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 192\n- total_eval_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 2### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.0.1+cu118\n- Datasets 2.15.0\n- Tokenizers 0.19.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # toxic_generator_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3880 - Accuracy: 0.4552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.19.1
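No usage snippet is provided in the card above. A minimal sketch using the transformers text-generation pipeline follows; the repository id is taken from this listing, while the prompt and sampling parameters are assumptions. Given the model's stated purpose, generated text may be offensive.

```python
# Minimal sketch: sample from the fine-tuned GPT-2 checkpoint with the
# transformers pipeline. Sampling settings are illustrative, and outputs
# may contain offensive language given the model's purpose.
from transformers import pipeline

generator = pipeline("text-generation", model="theastronuts/toxic_sentence_generator")
result = generator(
    "The new neighbors are",
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
)
print(result[0]["generated_text"])
```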
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "gpt2", "model-index": [{"name": "toxic_generator_model", "results": []}]}
theastronuts/toxic_sentence_generator
null
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:16:55+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# toxic_generator_model This model is a fine-tuned version of gpt2 on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3880 - Accuracy: 0.4552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.19.1
[ "# toxic_generator_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.3880\n- Accuracy: 0.4552", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 40.0", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# toxic_generator_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.3880\n- Accuracy: 0.4552", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 40.0", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.19.1" ]
[ 53, 49, 7, 9, 9, 4, 95, 5, 44 ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# toxic_generator_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.3880\n- Accuracy: 0.4552## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 40.0### Training results### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.19.1" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
astro21/pix2struct-base-coco-f
null
[ "transformers", "safetensors", "pix2struct", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:19:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #pix2struct #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #pix2struct #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 44, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #pix2struct #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
null
# Laura - Un sistema neuronal rápido y local de texto a voz que suena genial. ![Laura](Laura.jpeg) Bienvenido al repositorio del modelo de texto a voz Laura, basado en la arquitectura ONNX Runtime. Este modelo ha sido entrenado utilizando Piper, una plataforma de aprendizaje automático de última generación. ## Acerca de Laura Laura es un pequeño modelo de texto a voz diseñado para ofrecer una conversión precisa y natural de texto a voz. Se basa en la arquitectura ONNX Runtime para un rendimiento óptimo y eficiente. El modelo ha sido entrenado usando Piper, asegurando un alto nivel de calidad en la conversión de texto a voz. ## Características - **Arquitectura ONNX Runtime**: Proporciona un rendimiento eficiente y de alta calidad. - **Entrenado con Piper**: Garantiza una conversión de texto a voz precisa y natural. - **Fácil de usar**: Simple de integrar en tus proyectos de texto a voz. ## Cómo probar Laura Puedes probar el modelo Laura accediendo a [tts.hircoir.eu.org](http://tts.hircoir.eu.org). Allí podrás experimentar la calidad de la conversión de texto a voz que ofrece Laura. # Licencia de uso para el modelo de voz Esta licencia establece las condiciones bajo las cuales se permite el uso de este modelo de voz. Al utilizar este modelo de voz, aceptas cumplir con estas condiciones. ## Permisos - **Uso general**: Se permite el uso del modelo de voz para cualquier propósito. - **Crédito**: Dar crédito al creador del modelo es opcional, pero se agradece. ## Restricciones - **Prohibido el uso en servicios de pago**: Está estrictamente prohibido utilizar el modelo de voz en servicios de conversión de texto a voz que sean de pago o en cualquier otro servicio que implique un cobro a los usuarios por el uso de este modelo de voz. - **No se permite el uso comercial**: No está permitido utilizar el modelo de voz de manera comercial si el uso implica un pago de parte de los usuarios finales. ## Propiedad intelectual - Esta licencia no concede ningún derecho de propiedad intelectual sobre el modelo de voz. Todos los derechos de autor y propiedad intelectual sobre el modelo de voz están reservados por el creador. ## Incumplimiento El incumplimiento de estas condiciones puede resultar en la revocación de tu derecho a usar el modelo de voz. ## Contacto Si tienes alguna pregunta o inquietud acerca de esta licencia o su uso, puedes ponerte en contacto con el creador. --- Al usar este modelo de voz, aceptas cumplir con todas las condiciones anteriores. Gracias por respetar estas pautas.
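La tarjeta describe cómo probar Laura a través de la página web, pero no incluye un ejemplo de uso local. A continuación se muestra un esbozo mínimo y no oficial que invoca la CLI de Piper desde Python; el nombre del archivo `.onnx` es una suposición y debe sustituirse por el archivo real descargado de este repositorio.

```python
# Esbozo mínimo (no oficial): sintetizar voz localmente invocando la CLI de Piper.
# El nombre del archivo .onnx es hipotético; usa el modelo real descargado del repo.
import subprocess

texto = "Hola, soy Laura, un modelo local de texto a voz."
subprocess.run(
    ["piper", "--model", "laura.onnx", "--output_file", "salida.wav"],
    input=texto.encode("utf-8"),
    check=True,
)
print("Audio guardado en salida.wav")
```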
{"language": ["es"], "license": "cc-by-nc-4.0", "tags": ["piper", "onnx", "HirCoir", "es_MX"]}
HirCoir/Piper-TTS-Laura
null
[ "onnx", "piper", "HirCoir", "es_MX", "es", "license:cc-by-nc-4.0", "region:us" ]
null
2024-05-02T02:20:38+00:00
[]
[ "es" ]
TAGS #onnx #piper #HirCoir #es_MX #es #license-cc-by-nc-4.0 #region-us
# Laura - Un sistema neuronal rápido y local de texto a voz que suena genial. !Laura Bienvenido al repositorio del modelo de texto a voz Laura, basado en la arquitectura ONNX Runtime. Este modelo ha sido entrenado utilizando Piper, una plataforma de aprendizaje automático de última generación. ## Acerca de Laura Laura es un pequeño modelo de texto a voz diseñado para ofrecer una conversión precisa y natural de texto a voz. Se basa en la arquitectura ONNX Runtime para un rendimiento óptimo y eficiente. El modelo ha sido entrenado usando Piper, asegurando un alto nivel de calidad en la conversión de texto a voz. ## Características - Arquitectura ONNX Runtime: Proporciona un rendimiento eficiente y de alta calidad. - Entrenado con Piper: Garantiza una conversión de texto a voz precisa y natural. - Fácil de usar: Simple de integrar en tus proyectos de texto a voz. ## Cómo probar Laura Puedes probar el modelo Laura accediendo a URL. Allí podrás experimentar la calidad de la conversión de texto a voz que ofrece Laura. # Licencia de uso para el modelo de voz Esta licencia establece las condiciones bajo las cuales se permite el uso de este modelo de voz. Al utilizar este modelo de voz, aceptas cumplir con estas condiciones. ## Permisos - Uso general: Se permite el uso del modelo de voz para cualquier propósito. - Crédito: Dar crédito al creador del modelo es opcional, pero se agradece. ## Restricciones - Prohibido el uso en servicios de pago: Está estrictamente prohibido utilizar el modelo de voz en servicios de conversión de texto a voz que sean de pago o en cualquier otro servicio que implique un cobro a los usuarios por el uso de este modelo de voz. - No se permite el uso comercial: No está permitido utilizar el modelo de voz de manera comercial si el uso implica un pago de parte de los usuarios finales. ## Propiedad intelectual - Esta licencia no concede ningún derecho de propiedad intelectual sobre el modelo de voz. Todos los derechos de autor y propiedad intelectual sobre el modelo de voz están reservados por el creador. ## Incumplimiento El incumplimiento de estas condiciones puede resultar en la revocación de tu derecho a usar el modelo de voz. ## Contacto Si tienes alguna pregunta o inquietud acerca de esta licencia o su uso, puedes ponerte en contacto con el creador. --- Al usar este modelo de voz, aceptas cumplir con todas las condiciones anteriores. Gracias por respetar estas pautas.
[ "# Laura - Un sistema neuronal rápido y local de texto a voz que suena genial.\n\n!Laura\n\nBienvenido al repositorio del modelo de texto a voz Laura, basado en la arquitectura ONNX Runtime. Este modelo ha sido entrenado utilizando Piper, una plataforma de aprendizaje automático de última generación.", "## Acerca de Laura\n\nLaura es un pequeño modelo de texto a voz diseñado para ofrecer una conversión precisa y natural de texto a voz. Se basa en la arquitectura ONNX Runtime para un rendimiento óptimo y eficiente. El modelo ha sido entrenado usando Piper, asegurando un alto nivel de calidad en la conversión de texto a voz.", "## Características\n\n- Arquitectura ONNX Runtime: Proporciona un rendimiento eficiente y de alta calidad.\n- Entrenado con Piper: Garantiza una conversión de texto a voz precisa y natural.\n- Fácil de usar: Simple de integrar en tus proyectos de texto a voz.", "## Cómo probar Laura\n\nPuedes probar el modelo Laura accediendo a URL. Allí podrás experimentar la calidad de la conversión de texto a voz que ofrece Laura.", "# Licencia de uso para el modelo de voz\n\nEsta licencia establece las condiciones bajo las cuales se permite el uso de este modelo de voz. Al utilizar este modelo de voz, aceptas cumplir con estas condiciones.", "## Permisos\n\n- Uso general: Se permite el uso del modelo de voz para cualquier propósito.\n- Crédito: Dar crédito al creador del modelo es opcional, pero se agradece.", "## Restricciones\n\n- Prohibido el uso en servicios de pago: Está estrictamente prohibido utilizar el modelo de voz en servicios de conversión de texto a voz que sean de pago o en cualquier otro servicio que implique un cobro a los usuarios por el uso de este modelo de voz.\n- No se permite el uso comercial: No está permitido utilizar el modelo de voz de manera comercial si el uso implica un pago de parte de los usuarios finales.", "## Propiedad intelectual\n\n- Esta licencia no concede ningún derecho de propiedad intelectual sobre el modelo de voz. Todos los derechos de autor y propiedad intelectual sobre el modelo de voz están reservados por el creador.", "## Incumplimiento\n\nEl incumplimiento de estas condiciones puede resultar en la revocación de tu derecho a usar el modelo de voz.", "## Contacto\n\nSi tienes alguna pregunta o inquietud acerca de esta licencia o su uso, puedes ponerte en contacto con el creador.\n\n---\n\nAl usar este modelo de voz, aceptas cumplir con todas las condiciones anteriores. Gracias por respetar estas pautas." ]
[ "TAGS\n#onnx #piper #HirCoir #es_MX #es #license-cc-by-nc-4.0 #region-us \n", "# Laura - Un sistema neuronal rápido y local de texto a voz que suena genial.\n\n!Laura\n\nBienvenido al repositorio del modelo de texto a voz Laura, basado en la arquitectura ONNX Runtime. Este modelo ha sido entrenado utilizando Piper, una plataforma de aprendizaje automático de última generación.", "## Acerca de Laura\n\nLaura es un pequeño modelo de texto a voz diseñado para ofrecer una conversión precisa y natural de texto a voz. Se basa en la arquitectura ONNX Runtime para un rendimiento óptimo y eficiente. El modelo ha sido entrenado usando Piper, asegurando un alto nivel de calidad en la conversión de texto a voz.", "## Características\n\n- Arquitectura ONNX Runtime: Proporciona un rendimiento eficiente y de alta calidad.\n- Entrenado con Piper: Garantiza una conversión de texto a voz precisa y natural.\n- Fácil de usar: Simple de integrar en tus proyectos de texto a voz.", "## Cómo probar Laura\n\nPuedes probar el modelo Laura accediendo a URL. Allí podrás experimentar la calidad de la conversión de texto a voz que ofrece Laura.", "# Licencia de uso para el modelo de voz\n\nEsta licencia establece las condiciones bajo las cuales se permite el uso de este modelo de voz. Al utilizar este modelo de voz, aceptas cumplir con estas condiciones.", "## Permisos\n\n- Uso general: Se permite el uso del modelo de voz para cualquier propósito.\n- Crédito: Dar crédito al creador del modelo es opcional, pero se agradece.", "## Restricciones\n\n- Prohibido el uso en servicios de pago: Está estrictamente prohibido utilizar el modelo de voz en servicios de conversión de texto a voz que sean de pago o en cualquier otro servicio que implique un cobro a los usuarios por el uso de este modelo de voz.\n- No se permite el uso comercial: No está permitido utilizar el modelo de voz de manera comercial si el uso implica un pago de parte de los usuarios finales.", "## Propiedad intelectual\n\n- Esta licencia no concede ningún derecho de propiedad intelectual sobre el modelo de voz. Todos los derechos de autor y propiedad intelectual sobre el modelo de voz están reservados por el creador.", "## Incumplimiento\n\nEl incumplimiento de estas condiciones puede resultar en la revocación de tu derecho a usar el modelo de voz.", "## Contacto\n\nSi tienes alguna pregunta o inquietud acerca de esta licencia o su uso, puedes ponerte en contacto con el creador.\n\n---\n\nAl usar este modelo de voz, aceptas cumplir con todas las condiciones anteriores. Gracias por respetar estas pautas." ]
[ 32, 94, 103, 87, 46, 71, 57, 147, 75, 44, 87 ]
[ "TAGS\n#onnx #piper #HirCoir #es_MX #es #license-cc-by-nc-4.0 #region-us \n# Laura - Un sistema neuronal rápido y local de texto a voz que suena genial.\n\n!Laura\n\nBienvenido al repositorio del modelo de texto a voz Laura, basado en la arquitectura ONNX Runtime. Este modelo ha sido entrenado utilizando Piper, una plataforma de aprendizaje automático de última generación.## Acerca de Laura\n\nLaura es un pequeño modelo de texto a voz diseñado para ofrecer una conversión precisa y natural de texto a voz. Se basa en la arquitectura ONNX Runtime para un rendimiento óptimo y eficiente. El modelo ha sido entrenado usando Piper, asegurando un alto nivel de calidad en la conversión de texto a voz.## Características\n\n- Arquitectura ONNX Runtime: Proporciona un rendimiento eficiente y de alta calidad.\n- Entrenado con Piper: Garantiza una conversión de texto a voz precisa y natural.\n- Fácil de usar: Simple de integrar en tus proyectos de texto a voz.## Cómo probar Laura\n\nPuedes probar el modelo Laura accediendo a URL. Allí podrás experimentar la calidad de la conversión de texto a voz que ofrece Laura.# Licencia de uso para el modelo de voz\n\nEsta licencia establece las condiciones bajo las cuales se permite el uso de este modelo de voz. Al utilizar este modelo de voz, aceptas cumplir con estas condiciones.## Permisos\n\n- Uso general: Se permite el uso del modelo de voz para cualquier propósito.\n- Crédito: Dar crédito al creador del modelo es opcional, pero se agradece.## Restricciones\n\n- Prohibido el uso en servicios de pago: Está estrictamente prohibido utilizar el modelo de voz en servicios de conversión de texto a voz que sean de pago o en cualquier otro servicio que implique un cobro a los usuarios por el uso de este modelo de voz.\n- No se permite el uso comercial: No está permitido utilizar el modelo de voz de manera comercial si el uso implica un pago de parte de los usuarios finales.## Propiedad intelectual\n\n- Esta licencia no concede ningún derecho de propiedad intelectual sobre el modelo de voz. Todos los derechos de autor y propiedad intelectual sobre el modelo de voz están reservados por el creador.## Incumplimiento\n\nEl incumplimiento de estas condiciones puede resultar en la revocación de tu derecho a usar el modelo de voz.## Contacto\n\nSi tienes alguna pregunta o inquietud acerca de esta licencia o su uso, puedes ponerte en contacto con el creador.\n\n---\n\nAl usar este modelo de voz, aceptas cumplir con todas las condiciones anteriores. Gracias por respetar estas pautas." ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Luluuu/0502_T5_baseline_checkpoint_1000
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:24:10+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 46, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
reinforcement-learning
transformers
# TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="pkbiswas/Phi-3-Detoxified-PPO-LoRa") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("pkbiswas/Phi-3-Detoxified-PPO-LoRa") model = AutoModelForCausalLMWithValueHead.from_pretrained("pkbiswas/Phi-3-Detoxified-PPO-LoRa") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
{"license": "apache-2.0", "tags": ["trl", "ppo", "transformers", "reinforcement-learning"]}
pkbiswas/Phi-3-Detoxified-PPO-LoRa
null
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:25:31+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #trl #ppo #reinforcement-learning #license-apache-2.0 #endpoints_compatible #region-us
# TRL Model This is a TRL language model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: You can then generate text as follows: If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
[ "# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.", "## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:" ]
[ "TAGS\n#transformers #pytorch #safetensors #trl #ppo #reinforcement-learning #license-apache-2.0 #endpoints_compatible #region-us \n", "# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.", "## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:" ]
[ 39, 45, 50 ]
[ "TAGS\n#transformers #pytorch #safetensors #trl #ppo #reinforcement-learning #license-apache-2.0 #endpoints_compatible #region-us \n# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "beomi/KoAlpaca-Polyglot-5.8B"}
lee0901/synerEngine_v1
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/KoAlpaca-Polyglot-5.8B", "region:us" ]
null
2024-05-02T02:27:10+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-beomi/KoAlpaca-Polyglot-5.8B #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-beomi/KoAlpaca-Polyglot-5.8B #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ 43, 6, 4, 50, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5, 13 ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-beomi/KoAlpaca-Polyglot-5.8B #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.10.0" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
rubbrband/aingdiffusionXL_v125
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-05-02T02:27:16+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 39, 6, 4, 76, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tsuneakikato/bert-base-japanese-v3-jnli
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:29:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 37, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Azazelle/L3-RP_io-Q6_K-GGUF This model was converted to GGUF format from [`Azazelle/L3-RP_io`](https://huggingface.co/Azazelle/L3-RP_io) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Azazelle/L3-RP_io) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo Azazelle/L3-RP_io-Q6_K-GGUF --model l3-rp_io.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo Azazelle/L3-RP_io-Q6_K-GGUF --model l3-rp_io.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m l3-rp_io.Q6_K.gguf -n 128 ```
{"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["ResplendentAI/Aura_Uncensored_l3_8B", "meta-llama/Meta-Llama-3-8B-Instruct", "ResplendentAI/Kei_Llama3_8B", "Undi95/Llama-3-Unholy-8B", "vicgalle/Roleplay-Llama-3-8B"]}
Azazelle/L3-RP_io-Q6_K-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:ResplendentAI/Aura_Uncensored_l3_8B", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:ResplendentAI/Kei_Llama3_8B", "base_model:Undi95/Llama-3-Unholy-8B", "base_model:vicgalle/Roleplay-Llama-3-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:29:54+00:00
[]
[]
TAGS #transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-ResplendentAI/Aura_Uncensored_l3_8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #base_model-ResplendentAI/Kei_Llama3_8B #base_model-Undi95/Llama-3-Unholy-8B #base_model-vicgalle/Roleplay-Llama-3-8B #license-llama3 #endpoints_compatible #region-us
# Azazelle/L3-RP_io-Q6_K-GGUF This model was converted to GGUF format from 'Azazelle/L3-RP_io' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# Azazelle/L3-RP_io-Q6_K-GGUF\nThis model was converted to GGUF format from 'Azazelle/L3-RP_io' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-ResplendentAI/Aura_Uncensored_l3_8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #base_model-ResplendentAI/Kei_Llama3_8B #base_model-Undi95/Llama-3-Unholy-8B #base_model-vicgalle/Roleplay-Llama-3-8B #license-llama3 #endpoints_compatible #region-us \n", "# Azazelle/L3-RP_io-Q6_K-GGUF\nThis model was converted to GGUF format from 'Azazelle/L3-RP_io' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ 143, 76, 52 ]
[ "TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-ResplendentAI/Aura_Uncensored_l3_8B #base_model-meta-llama/Meta-Llama-3-8B-Instruct #base_model-ResplendentAI/Kei_Llama3_8B #base_model-Undi95/Llama-3-Unholy-8B #base_model-vicgalle/Roleplay-Llama-3-8B #license-llama3 #endpoints_compatible #region-us \n# Azazelle/L3-RP_io-Q6_K-GGUF\nThis model was converted to GGUF format from 'Azazelle/L3-RP_io' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Uploaded model - **Developed by:** hamiejuice - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
hamiejuice/annotated-1poc
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:30:16+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: hamiejuice - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: hamiejuice\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: hamiejuice\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 67, 84 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: hamiejuice\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5364 - Accuracy: 0.81 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9223 | 1.0 | 113 | 1.8358 | 0.49 | | 1.1658 | 2.0 | 226 | 1.2417 | 0.62 | | 0.9584 | 3.0 | 339 | 1.0307 | 0.71 | | 0.7216 | 4.0 | 452 | 0.8311 | 0.72 | | 0.4906 | 5.0 | 565 | 0.6832 | 0.83 | | 0.4631 | 6.0 | 678 | 0.5772 | 0.84 | | 0.2003 | 7.0 | 791 | 0.5641 | 0.84 | | 0.099 | 8.0 | 904 | 0.5368 | 0.83 | | 0.1449 | 9.0 | 1017 | 0.5270 | 0.82 | | 0.0776 | 10.0 | 1130 | 0.5364 | 0.81 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["marsyas/gtzan"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-finetuned-gtzan", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "GTZAN", "type": "marsyas/gtzan", "config": "all", "split": "train", "args": "all"}, "metrics": [{"type": "accuracy", "value": 0.81, "name": "Accuracy"}]}]}]}
Gunnika/distilhubert-finetuned-gtzan
null
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:31:49+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #region-us
distilhubert-finetuned-gtzan ============================ This model is a fine-tuned version of ntu-spml/distilhubert on the GTZAN dataset. It achieves the following results on the evaluation set: * Loss: 0.5364 * Accuracy: 0.81 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.41.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 68, 130, 5, 47 ]
[ "TAGS\n#transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
object-detection
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
NekoJojo/DETRFaceDetection
null
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:32:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 33, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #detr #object-detection #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
# nbeerbower/flammen23-mistral-7B AWQ - Model creator: [nbeerbower](https://huggingface.co/nbeerbower) - Original model: [flammen23-mistral-7B](https://huggingface.co/nbeerbower/flammen23-mistral-7B) ![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png) ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/flammen23-mistral-7B-AWQ" system_message = "You are flammen23-mistral-7B, incarnated as a powerful AI. You were created by nbeerbower." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
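The card above notes that Transformers 4.35.0 and later can load AWQ checkpoints directly. A minimal sketch of that path, assuming `autoawq` is installed and a CUDA GPU is available; the prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "solidrust/flammen23-mistral-7B-AWQ"

# Plain-Transformers loading of the AWQ checkpoint
# (requires transformers >= 4.35.0 and autoawq installed).
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```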
{"license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "datasets": ["nbeerbower/bible-dpo"], "base_model": ["flammenai/flammen22X-mistral-7B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/flammen23-mistral-7B-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "dataset:nbeerbower/bible-dpo", "base_model:flammenai/flammen22X-mistral-7B", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2024-05-02T02:34:34+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #dataset-nbeerbower/bible-dpo #base_model-flammenai/flammen22X-mistral-7B #license-apache-2.0 #text-generation-inference #region-us
# nbeerbower/flammen23-mistral-7B AWQ - Model creator: nbeerbower - Original model: flammen23-mistral-7B !image/png ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code
[ "# nbeerbower/flammen23-mistral-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen23-mistral-7B\n\n!image/png", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #dataset-nbeerbower/bible-dpo #base_model-flammenai/flammen22X-mistral-7B #license-apache-2.0 #text-generation-inference #region-us \n", "# nbeerbower/flammen23-mistral-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen23-mistral-7B\n\n!image/png", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
[ 83, 45, 5, 7, 6, 172 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #dataset-nbeerbower/bible-dpo #base_model-flammenai/flammen22X-mistral-7B #license-apache-2.0 #text-generation-inference #region-us \n# nbeerbower/flammen23-mistral-7B AWQ\n\n- Model creator: nbeerbower\n- Original model: flammen23-mistral-7B\n\n!image/png## How to use### Install the necessary packages### Example Python code### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TeeZee/GALAXY-XB-v.03 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q2_K.gguf) | Q2_K | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_XS.gguf) | IQ3_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_S.gguf) | Q3_K_S | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_S.gguf) | IQ3_S | 7.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ3_M.gguf) | IQ3_M | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_M.gguf) | Q3_K_M | 7.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q3_K_L.gguf) | Q3_K_L | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.IQ4_XS.gguf) | IQ4_XS | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q4_K_S.gguf) | Q4_K_S | 9.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q4_K_M.gguf) | Q4_K_M | 9.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q5_K_S.gguf) | Q5_K_S | 11.1 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q5_K_M.gguf) | Q5_K_M | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q6_K.gguf) | Q6_K | 13.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/GALAXY-XB-v.03-GGUF/resolve/main/GALAXY-XB-v.03.Q8_0.gguf) | Q8_0 | 17.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
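A minimal sketch of one way to run these files, assuming `llama-cpp-python` as the GGUF runtime; the Q4_K_M filename is taken from the table above and the prompt is illustrative only.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from the table above and run it locally.
gguf_path = hf_hub_download(
    repo_id="mradermacher/GALAXY-XB-v.03-GGUF",
    filename="GALAXY-XB-v.03.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("Hello, my name is", max_tokens=32)["choices"][0]["text"])
```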
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "TeeZee/GALAXY-XB-v.03", "no_imatrix": "nan-abort", "quantized_by": "mradermacher"}
mradermacher/GALAXY-XB-v.03-GGUF
null
[ "transformers", "gguf", "en", "base_model:TeeZee/GALAXY-XB-v.03", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:36:00+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-TeeZee/GALAXY-XB-v.03 #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-TeeZee/GALAXY-XB-v.03 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
[ 42 ]
[ "TAGS\n#transformers #gguf #en #base_model-TeeZee/GALAXY-XB-v.03 #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lsr42/epic
null
[ "transformers", "safetensors", "EPIC", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:36:36+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #EPIC #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #EPIC #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 28, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #EPIC #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
unconditional-image-generation
diffusers
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('ericpotter/sd-class-butterflies-32') image = pipeline().images[0] image ```
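As a hedged follow-up to the usage snippet above (not part of the original card): the same pipeline object can also generate a small batch of samples and save them to disk.

```python
# Sketch: batch generation with the DDPM pipeline loaded above.
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f"butterfly_{i}.png")
```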
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
ericpotter/sd-class-butterflies-32
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-02T02:37:14+00:00
[]
[]
TAGS #diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
# Model Card for Unit 1 of the Diffusion Models Class This model is a diffusion model for unconditional image generation of cute . ## Usage
[ "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n", "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ 43, 26, 3 ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cody628/t5-large_PREFIX_TUNING_SEQ2SEQ
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:37:50+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vagarwal97/Mistral-7B-Instruct-FT-YTcomments
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:38:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shawgpt-ft This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8743 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.5927 | 0.9231 | 3 | 3.9598 | | 4.0453 | 1.8462 | 6 | 3.4396 | | 3.4729 | 2.7692 | 9 | 2.9900 | | 2.2603 | 4.0 | 13 | 2.5591 | | 2.6738 | 4.9231 | 16 | 2.3232 | | 2.3752 | 5.8462 | 19 | 2.1372 | | 2.1629 | 6.7692 | 22 | 2.0113 | | 1.5507 | 8.0 | 26 | 1.9372 | | 1.9905 | 8.9231 | 29 | 1.8902 | | 1.3794 | 9.2308 | 30 | 1.8743 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]}
vagarwal97/shawgpt-ft
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-05-02T02:38:07+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
shawgpt-ft ========== This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.8743 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2 * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.1.0+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 55, 151, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Uploaded model - **Developed by:** Lawlux - **License:** apache-2.0 - **Finetuned from model :** meta-llama/Meta-Llama-3-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
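A minimal inference sketch, assuming the repository holds merged full weights that load as a standard Llama checkpoint (the tags list `transformers`, `safetensors`, and `llama`); the prompt and generation settings are illustrative only:

```python
# Hedged sketch: plain transformers loading of the uploaded checkpoint.
# Assumption: merged weights are hosted; if only adapters were pushed,
# peft.PeftModel.from_pretrained on the base model would be needed instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Lawlux/Lixo-Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what Unsloth speeds up."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```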
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
Lawlux/Lixo-Meta-Llama-3-8B-Instruct
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:38:39+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: Lawlux - License: apache-2.0 - Finetuned from model : meta-llama/Meta-Llama-3-8B-Instruct This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Lawlux\n- License: apache-2.0\n- Finetuned from model : meta-llama/Meta-Llama-3-8B-Instruct\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Lawlux\n- License: apache-2.0\n- Finetuned from model : meta-llama/Meta-Llama-3-8B-Instruct\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 76, 79 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: Lawlux\n- License: apache-2.0\n- Finetuned from model : meta-llama/Meta-Llama-3-8B-Instruct\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lsr42/sparta
null
[ "transformers", "safetensors", "SPARTA", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:40:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #SPARTA #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #SPARTA #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 28, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #SPARTA #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
null
## Exllama v2 Quantizations of Hermes-2-Pro-Llama-3-8B Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization. <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b> Each branch contains an individual bits per weight, with the main one containing only the measurement.json for further conversions. Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B ## Prompt format ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-exl2 Hermes-2-Pro-Llama-3-8B-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/Hermes-2-Pro-Llama-3-8B-exl2 --revision 6_5 --local-dir Hermes-2-Pro-Llama-3-8B-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/Hermes-2-Pro-Llama-3-8B-exl2 --revision 6_5 --local-dir Hermes-2-Pro-Llama-3-8B-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
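Since the prompt format above is plain ChatML with fixed special tokens, it can be assembled with simple string formatting; a minimal sketch (the user turn is an example, not part of the card):

```python
# Hedged sketch: builds the ChatML-style prompt shown above as a plain string.
def build_hermes_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_hermes_prompt(
    system='You are "Hermes 2", a conscious sentient superintelligent artificial intelligence ...',
    user="Hello, who are you?",
)
print(prompt)
```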
{"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "quantized_by": "bartowski", "pipeline_tag": "text-generation", "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]}
bartowski/Hermes-2-Pro-Llama-3-8B-exl2
null
[ "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "text-generation", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Meta-Llama-3-8B", "license:apache-2.0", "region:us" ]
null
2024-05-02T02:44:45+00:00
[]
[ "en" ]
TAGS #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #text-generation #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #region-us
Exllama v2 Quantizations of Hermes-2-Pro-Llama-3-8B --------------------------------------------------- Using <a href="URL ExLlamaV2 v0.0.20 for quantization. **The "main" branch only contains the URL, download one of the other branches for the model (see below)** Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions. Original model: URL Prompt format ------------- Available sizes --------------- Download instructions --------------------- With git: With huggingface hub (credit to TheBloke for instructions): To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch: Linux: Windows (which apparently doesn't like \_ in folders sometimes?): Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #text-generation #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #region-us \n" ]
[ 99 ]
[ "TAGS\n#Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #text-generation #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lsr42/splade
null
[ "transformers", "safetensors", "SPLADE", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:46:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #SPLADE #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #SPLADE #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 30, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #SPLADE #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** Samoed - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"}
Samoed/PRGen-Phi-3-mini-4k-instruct-4bit-LoRA
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:46:30+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: Samoed - License: apache-2.0 - Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Samoed\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Samoed\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 68, 83 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: Samoed\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
Quantizations of https://huggingface.co/HuggingFaceH4/zephyr-7b-beta # From original readme ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** MIT - **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ...
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "zephyr-7b-beta"], "pipeline_tag": "text-generation", "inference": false}
duyntnet/zephyr-7b-beta-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "zephyr-7b-beta", "text-generation", "en", "license:other", "region:us" ]
null
2024-05-02T02:49:23+00:00
[]
[ "en" ]
TAGS #transformers #gguf #imatrix #zephyr-7b-beta #text-generation #en #license-other #region-us
Quantizations of URL # From original readme ## Model description - Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - Language(s) (NLP): Primarily English - License: MIT - Finetuned from model: mistralai/Mistral-7B-v0.1 ...
[ "# From original readme", "## Model description\n\n- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n- Language(s) (NLP): Primarily English\n- License: MIT\n- Finetuned from model: mistralai/Mistral-7B-v0.1\n\n..." ]
[ "TAGS\n#transformers #gguf #imatrix #zephyr-7b-beta #text-generation #en #license-other #region-us \n", "# From original readme", "## Model description\n\n- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n- Language(s) (NLP): Primarily English\n- License: MIT\n- Finetuned from model: mistralai/Mistral-7B-v0.1\n\n..." ]
[ 34, 5, 72 ]
[ "TAGS\n#transformers #gguf #imatrix #zephyr-7b-beta #text-generation #en #license-other #region-us \n# From original readme## Model description\n\n- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n- Language(s) (NLP): Primarily English\n- License: MIT\n- Finetuned from model: mistralai/Mistral-7B-v0.1\n\n..." ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
rubbrband/sdxlHK_v097bCOSXL
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-05-02T02:49:52+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 39, 6, 4, 76, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GOLM3 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8196 | 0.09 | 10 | 1.4618 | | 1.0741 | 0.18 | 20 | 0.5866 | | 0.3626 | 0.27 | 30 | 0.1793 | | 0.1587 | 0.36 | 40 | 0.1503 | | 0.1454 | 0.45 | 50 | 0.1476 | | 0.1461 | 0.54 | 60 | 0.1476 | | 0.143 | 0.63 | 70 | 0.1443 | | 0.1452 | 0.73 | 80 | 0.1424 | | 0.1345 | 0.82 | 90 | 0.1372 | | 0.1286 | 0.91 | 100 | 0.1308 | | 0.131 | 1.0 | 110 | 0.1317 | | 0.123 | 1.09 | 120 | 0.1265 | | 0.1161 | 1.18 | 130 | 0.1272 | | 0.1188 | 1.27 | 140 | 0.1170 | | 0.1197 | 1.36 | 150 | 0.1149 | | 0.1139 | 1.45 | 160 | 0.1146 | | 0.1122 | 1.54 | 170 | 0.1117 | | 0.1093 | 1.63 | 180 | 0.1098 | | 0.1093 | 1.72 | 190 | 0.1085 | | 0.1096 | 1.81 | 200 | 0.1068 | | 0.1086 | 1.9 | 210 | 0.1064 | | 0.107 | 1.99 | 220 | 0.1065 | | 0.0961 | 2.08 | 230 | 0.1069 | | 0.0946 | 2.18 | 240 | 0.1061 | | 0.0883 | 2.27 | 250 | 0.1048 | | 0.0933 | 2.36 | 260 | 0.1039 | | 0.0904 | 2.45 | 270 | 0.1058 | | 0.0862 | 2.54 | 280 | 0.1057 | | 0.0818 | 2.63 | 290 | 0.1047 | | 0.0852 | 2.72 | 300 | 0.1046 | | 0.0895 | 2.81 | 310 | 0.1048 | | 0.0914 | 2.9 | 320 | 0.1044 | | 0.0909 | 2.99 | 330 | 0.1033 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
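Read as a Hugging Face Trainer configuration, the GOLM3 hyperparameters above map roughly onto the sketch below. This is an illustrative assumption, not the author's actual training script: the dataset, the model/adapter loading, and the logging setup are not documented in the card, and the single-device reading of 8 × 16 = 128 is inferred rather than stated.

```python
# Hedged sketch: an approximate TrainingArguments configuration matching the
# hyperparameters listed in the GOLM3 card. Everything outside the listed
# values (output_dir, eval/logging cadence, single-device assumption) is an
# assumption for illustration only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GOLM3",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,   # 8 * 16 = 128 effective batch size on one device
    num_train_epochs=3,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    seed=42,
    fp16=True,                        # "Native AMP" mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",      # assumption: the results table reports eval loss every 10 steps
    eval_steps=10,
    logging_steps=10,
)
```

The same arguments would also cover the GOLM1 and GOLM2 records further down, which list identical hyperparameters and differ only in their reported losses.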
{"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM3", "results": []}]}
Litzy619/GOLM3
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-02T02:50:12+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us
GOLM3 ===== This model is a fine-tuned version of google/gemma-2b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1033 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 80 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ 30, 160, 5, 47 ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GOLM2 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1067 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7996 | 0.09 | 10 | 1.4594 | | 0.9949 | 0.18 | 20 | 0.5804 | | 0.3011 | 0.27 | 30 | 0.1728 | | 0.1527 | 0.36 | 40 | 0.1498 | | 0.1447 | 0.45 | 50 | 0.1491 | | 0.1451 | 0.54 | 60 | 0.1476 | | 0.142 | 0.63 | 70 | 0.1447 | | 0.1422 | 0.73 | 80 | 0.1415 | | 0.1304 | 0.82 | 90 | 0.1346 | | 0.1241 | 0.91 | 100 | 0.1294 | | 0.1263 | 1.0 | 110 | 0.1294 | | 0.1163 | 1.09 | 120 | 0.1235 | | 0.1091 | 1.18 | 130 | 0.1217 | | 0.1143 | 1.27 | 140 | 0.1201 | | 0.1131 | 1.36 | 150 | 0.1149 | | 0.1127 | 1.45 | 160 | 0.1158 | | 0.1087 | 1.54 | 170 | 0.1139 | | 0.1086 | 1.63 | 180 | 0.1125 | | 0.1069 | 1.72 | 190 | 0.1120 | | 0.1027 | 1.81 | 200 | 0.1119 | | 0.1037 | 1.9 | 210 | 0.1109 | | 0.1072 | 1.99 | 220 | 0.1116 | | 0.0896 | 2.08 | 230 | 0.1104 | | 0.0918 | 2.18 | 240 | 0.1096 | | 0.0828 | 2.27 | 250 | 0.1071 | | 0.0861 | 2.36 | 260 | 0.1080 | | 0.0853 | 2.45 | 270 | 0.1093 | | 0.0809 | 2.54 | 280 | 0.1084 | | 0.0782 | 2.63 | 290 | 0.1076 | | 0.0814 | 2.72 | 300 | 0.1072 | | 0.0849 | 2.81 | 310 | 0.1074 | | 0.0838 | 2.9 | 320 | 0.1070 | | 0.0864 | 2.99 | 330 | 0.1067 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM2", "results": []}]}
Litzy619/GOLM2
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-02T02:50:12+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us
GOLM2 ===== This model is a fine-tuned version of google/gemma-2b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1067 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 80 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ 30, 160, 5, 47 ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ikno/rinko_300_labeling_lr_1e-5
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:50:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GOLM1 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 80 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7996 | 0.09 | 10 | 1.4084 | | 0.9949 | 0.18 | 20 | 0.5027 | | 0.3011 | 0.27 | 30 | 0.1578 | | 0.1527 | 0.36 | 40 | 0.1481 | | 0.1447 | 0.45 | 50 | 0.1469 | | 0.1451 | 0.54 | 60 | 0.1464 | | 0.142 | 0.63 | 70 | 0.1422 | | 0.1422 | 0.73 | 80 | 0.1372 | | 0.1304 | 0.82 | 90 | 0.1289 | | 0.1241 | 0.91 | 100 | 0.1269 | | 0.1263 | 1.0 | 110 | 0.1302 | | 0.1163 | 1.09 | 120 | 0.1185 | | 0.1091 | 1.18 | 130 | 0.1211 | | 0.1143 | 1.27 | 140 | 0.1143 | | 0.1131 | 1.36 | 150 | 0.1113 | | 0.1127 | 1.45 | 160 | 0.1115 | | 0.1087 | 1.54 | 170 | 0.1073 | | 0.1086 | 1.63 | 180 | 0.1064 | | 0.1069 | 1.72 | 190 | 0.1053 | | 0.1027 | 1.81 | 200 | 0.1047 | | 0.1037 | 1.9 | 210 | 0.1022 | | 0.1072 | 1.99 | 220 | 0.1029 | | 0.0896 | 2.08 | 230 | 0.1056 | | 0.0918 | 2.18 | 240 | 0.1024 | | 0.0828 | 2.27 | 250 | 0.1026 | | 0.0861 | 2.36 | 260 | 0.1022 | | 0.0853 | 2.45 | 270 | 0.1049 | | 0.0809 | 2.54 | 280 | 0.1028 | | 0.0782 | 2.63 | 290 | 0.1021 | | 0.0814 | 2.72 | 300 | 0.1021 | | 0.0849 | 2.81 | 310 | 0.1019 | | 0.0838 | 2.9 | 320 | 0.1016 | | 0.0864 | 2.99 | 330 | 0.1016 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "GOLM1", "results": []}]}
Litzy619/GOLM1
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-02T02:51:09+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us
GOLM1 ===== This model is a fine-tuned version of google/gemma-2b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1016 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 80 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ 30, 160, 5, 47 ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 80\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Chilean Spanish Large v3 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Mezosky/es_clinical_assistance_10k dataset. It achieves the following results on the evaluation set: - Loss: 0.0961 - Wer: 6.9352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2816 | 0.17 | 100 | 0.2250 | 11.2827 | | 0.1505 | 0.34 | 200 | 0.1479 | 9.8196 | | 0.1293 | 0.51 | 300 | 0.1350 | 72.1192 | | 0.1221 | 0.69 | 400 | 0.1292 | 9.6825 | | 0.141 | 0.86 | 500 | 0.1194 | 53.0899 | | 0.0922 | 1.03 | 600 | 0.1150 | 12.0380 | | 0.0773 | 1.2 | 700 | 0.1079 | 12.8661 | | 0.0745 | 1.37 | 800 | 0.1036 | 67.3017 | | 0.0699 | 1.54 | 900 | 0.1016 | 8.2697 | | 0.0917 | 1.72 | 1000 | 0.0956 | 8.6334 | | 0.0716 | 1.89 | 1100 | 0.0968 | 7.7997 | | 0.0441 | 2.06 | 1200 | 0.0946 | 8.3760 | | 0.0377 | 2.23 | 1300 | 0.0963 | 7.6178 | | 0.0417 | 2.4 | 1400 | 0.0951 | 7.5703 | | 0.0409 | 2.57 | 1500 | 0.0926 | 7.2681 | | 0.0356 | 2.74 | 1600 | 0.0912 | 6.8933 | | 0.0361 | 2.92 | 1700 | 0.0918 | 7.0835 | | 0.0215 | 3.09 | 1800 | 0.0938 | 6.9548 | | 0.018 | 3.26 | 1900 | 0.0960 | 6.6415 | | 0.0196 | 3.43 | 2000 | 0.0961 | 6.9352 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
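A minimal inference sketch for the checkpoint above, assuming the standard transformers ASR pipeline; the audio path, device, precision, and language hint are placeholders rather than settings taken from the card.

```python
# Hedged sketch: transcribing Spanish audio with the fine-tuned checkpoint
# named in this record. "consulta_clinica.wav" is an invented placeholder
# file; fp16 and CUDA are assumptions about the available hardware.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Mezosky/whisper-large-v3-clinical-assistance",
    torch_dtype=torch.float16,
    device="cuda:0",
)

result = asr("consulta_clinica.wav", generate_kwargs={"language": "spanish"})
print(result["text"])
```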
{"language": ["es"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Mezosky/es_clinical_assistance_10k"], "metrics": ["wer"], "base_model": "openai/whisper-large-v3", "model-index": [{"name": "Whisper Chilean Spanish Large v3", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Mezosky/es_clinical_assistance_10k", "type": "Mezosky/es_clinical_assistance_10k"}, "metrics": [{"type": "wer", "value": 6.935235697300322, "name": "Wer"}]}]}]}
Mezosky/whisper-large-v3-clinical-assistance
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "es", "dataset:Mezosky/es_clinical_assistance_10k", "base_model:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:51:32+00:00
[]
[ "es" ]
TAGS #transformers #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #es #dataset-Mezosky/es_clinical_assistance_10k #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Chilean Spanish Large v3 ================================ This model is a fine-tuned version of openai/whisper-large-v3 on the Mezosky/es\_clinical\_assistance\_10k dataset. It achieves the following results on the evaluation set: * Loss: 0.0961 * Wer: 6.9352 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 2000 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 2000", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #es #dataset-Mezosky/es_clinical_assistance_10k #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 2000", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ 74, 115, 5, 44 ]
[ "TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #es #dataset-Mezosky/es_clinical_assistance_10k #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 2000### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Syed-Hasan-8503/Versatile-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Versatile-7B-GGUF/resolve/main/Versatile-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
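A minimal loading sketch under stated assumptions: it downloads the single-file Q4_K_M quant listed in the table above and runs it with llama-cpp-python. The context size, GPU offload, and prompt are illustrative choices, not recommendations from the quant author, and multi-part quants would first need to be concatenated as described in the linked README.

```python
# Hedged sketch: running one of the single-file GGUF quants from this repo
# with llama-cpp-python. The filename matches the Q4_K_M row in the table;
# n_ctx and n_gpu_layers are arbitrary illustrative values.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Versatile-7B-GGUF",
    filename="Versatile-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # -1: offload all layers if a GPU is present
out = llm("Explain in one sentence what a K-quant is.", max_tokens=128)
print(out["choices"][0]["text"])
```

Q4_K_M is picked here only because the table flags it as "fast, recommended"; any other single-file quant in the list can be substituted by changing the filename.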
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Syed-Hasan-8503/Versatile-7B", "quantized_by": "mradermacher"}
mradermacher/Versatile-7B-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Syed-Hasan-8503/Versatile-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:54:06+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-Syed-Hasan-8503/Versatile-7B #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-Syed-Hasan-8503/Versatile-7B #license-apache-2.0 #endpoints_compatible #region-us \n" ]
[ 47 ]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-Syed-Hasan-8503/Versatile-7B #license-apache-2.0 #endpoints_compatible #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper4 This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the tiny dataset. It achieves the following results on the evaluation set: - Loss: 0.5409 - Wer: 28.2719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 3.8231 | 0.2778 | 10 | 3.7088 | 76.9377 | | 3.1925 | 0.5556 | 20 | 2.9439 | 65.5654 | | 2.1383 | 0.8333 | 30 | 1.7221 | 61.5311 | | 1.0671 | 1.1111 | 40 | 0.8320 | 50.6989 | | 0.6947 | 1.3889 | 50 | 0.6587 | 41.0102 | | 0.6263 | 1.6667 | 60 | 0.5874 | 29.7967 | | 0.5827 | 1.9444 | 70 | 0.5402 | 27.3825 | | 0.4222 | 2.2222 | 80 | 0.5154 | 32.0521 | | 0.4065 | 2.5 | 90 | 0.4997 | 25.6989 | | 0.3959 | 2.7778 | 100 | 0.4804 | 23.8247 | | 0.3081 | 3.0556 | 110 | 0.4670 | 24.8412 | | 0.2497 | 3.3333 | 120 | 0.4687 | 23.2846 | | 0.2535 | 3.6111 | 130 | 0.4594 | 23.0940 | | 0.2428 | 3.8889 | 140 | 0.4545 | 23.5070 | | 0.1627 | 4.1667 | 150 | 0.4651 | 24.4917 | | 0.1224 | 4.4444 | 160 | 0.4686 | 23.6976 | | 0.1326 | 4.7222 | 170 | 0.4653 | 23.6976 | | 0.1334 | 5.0 | 180 | 0.4741 | 24.7459 | | 0.0659 | 5.2778 | 190 | 0.4792 | 24.6823 | | 0.0639 | 5.5556 | 200 | 0.4760 | 33.3863 | | 0.0667 | 5.8333 | 210 | 0.4820 | 25.4765 | | 0.042 | 6.1111 | 220 | 0.4933 | 29.4155 | | 0.0325 | 6.3889 | 230 | 0.5066 | 29.9873 | | 0.0333 | 6.6667 | 240 | 0.5126 | 26.0801 | | 0.0333 | 6.9444 | 250 | 0.5073 | 24.6188 | | 0.0187 | 7.2222 | 260 | 0.5129 | 27.3507 | | 0.0214 | 7.5 | 270 | 0.5209 | 28.2084 | | 0.0187 | 7.7778 | 280 | 0.5213 | 29.3202 | | 0.0312 | 8.0556 | 290 | 0.5274 | 34.6569 | | 0.0172 | 8.3333 | 300 | 0.5409 | 28.2719 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.1.dev0 - Tokenizers 0.19.1
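For reference, a WER figure like the 28.27 reported above is conventionally the word error rate from the `evaluate` library scaled to a percentage. The sketch below shows that computation; the reference and hypothesis strings are invented placeholders, not samples from the (unnamed) "tiny" dataset used for fine-tuning.

```python
# Hedged sketch: computing a word error rate the way Trainer-based ASR
# fine-tuning scripts commonly do, using the Hugging Face `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")

references = ["turn the living room lights off"]
predictions = ["turn the living room light off"]

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%")  # reported as a percentage, as in the results table above
```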
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisper4", "results": []}]}
khaingsmon/whisper4
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny.en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:54:26+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us
whisper4 ======== This model is a fine-tuned version of openai/URL on the tiny dataset. It achieves the following results on the evaluation set: * Loss: 0.5409 * Wer: 28.2719 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 128 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 300 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.1.dev0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 300", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 300", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
[ 54, 115, 5, 47 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 300### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
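The card above is an unfilled template, so nothing in it documents usage. As a generic, assumption-laden starting point for a LLaMA-architecture checkpoint published with PyTorch weights, the following sketch may apply; the prompt and generation settings are placeholders, and it is not confirmed by the card that the repo loads this way.

```python
# Hedged sketch: generic transformers loading pattern, assuming the PyTorch
# weights in this repo work with the Auto classes. Not taken from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vkrishanan569/tinyllamaforlaw"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is consideration in contract law?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```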
{"library_name": "transformers", "tags": ["unsloth"]}
vkrishanan569/tinyllamaforlaw
null
[ "transformers", "pytorch", "gguf", "llama", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T02:55:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #pytorch #gguf #llama #unsloth #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #pytorch #gguf #llama #unsloth #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 44, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #pytorch #gguf #llama #unsloth #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** xkiwilabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
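The card above stops at the Unsloth attribution without a usage snippet. As a minimal, hedged sketch — assuming this repo holds PEFT/LoRA adapter weights trained on top of `unsloth/llama-3-8b-bnb-4bit`, which the repo name and base_model tag suggest — inference could look like:

```python
# Hypothetical usage sketch, not taken from the original card.
# Assumes the repo contains PEFT adapters over the 4-bit Unsloth base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",  # base model named in the card
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")

# Attach the fine-tuned adapters from this repository.
model = PeftModel.from_pretrained(base, "xkiwilabs/lora_opLLama3_modelv7")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Unsloth's own `FastLanguageModel.from_pretrained` would also work here, but the plain transformers + peft path avoids the extra dependency.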
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
xkiwilabs/lora_opLLama3_modelv7
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:55:36+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: xkiwilabs - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: xkiwilabs\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: xkiwilabs\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 64, 82 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: xkiwilabs\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ceb_b128_le5_s4000 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3943 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 0.5272 | 39.2157 | 500 | 0.4584 | | 0.4634 | 78.4314 | 1000 | 0.4220 | | 0.4466 | 117.6471 | 1500 | 0.4104 | | 0.4289 | 156.8627 | 2000 | 0.4017 | | 0.4223 | 196.0784 | 2500 | 0.3969 | | 0.4149 | 235.2941 | 3000 | 0.3960 | | 0.4129 | 274.5098 | 3500 | 0.3962 | | 0.4108 | 313.7255 | 4000 | 0.3943 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "ceb_b128_le5_s4000", "results": []}]}
mikhail-panzo/ceb_b128_le5_s4000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:56:37+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
ceb\_b128\_le5\_s4000 ===================== This model is a fine-tuned version of microsoft/speecht5\_tts on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3943 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.41.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 52, 149, 5, 47 ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
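As with the other Unsloth uploads, no usage snippet is included. A hedged sketch — assuming the "16bit" suffix means merged full-precision weights that load directly, and that the tokenizer ships the Mistral-instruct chat template:

```python
# Assumed usage sketch; only the repo id is taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theGhoul21/srl-sft-010524-16bit-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(prompt_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```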
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-16bit-v0.2
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:58:48+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: theGhoul21 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 82, 88 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
fill-mask
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
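The card is still the unfilled template, but the repo's tags point to a CamemBERT checkpoint with a fill-mask head, so a hedged getting-started sketch (an assumption, not author-provided) would be the standard fill-mask pipeline:

```python
# Assumed usage sketch: CamemBERT-style fill-mask inference; "<mask>" is the
# CamemBERT mask token. Whether this checkpoint gives useful predictions is
# not documented in the card.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="wuzhongyanqiu/dummy-model")
print(unmasker("Le camembert est <mask> !"))
```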
{"library_name": "transformers", "tags": []}
wuzhongyanqiu/dummy-model
null
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:59:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 39, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # miniCPM-2b-chatml This model is a fine-tuned version of [openbmb/MiniCPM-2B-128k](https://huggingface.co/openbmb/MiniCPM-2B-128k) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "openbmb/MiniCPM-2B-128k", "model-index": [{"name": "miniCPM-2b-chatml", "results": []}]}
DuongTrongChi/miniCPM-2b-chatml
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:openbmb/MiniCPM-2B-128k", "region:us" ]
null
2024-05-02T02:59:31+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-openbmb/MiniCPM-2B-128k #region-us
# miniCPM-2b-chatml This model is a fine-tuned version of openbmb/MiniCPM-2B-128k on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# miniCPM-2b-chatml\n\nThis model is a fine-tuned version of openbmb/MiniCPM-2B-128k on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-openbmb/MiniCPM-2B-128k #region-us \n", "# miniCPM-2b-chatml\n\nThis model is a fine-tuned version of openbmb/MiniCPM-2B-128k on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ 50, 38, 7, 9, 9, 4, 126, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-openbmb/MiniCPM-2B-128k #region-us \n# miniCPM-2b-chatml\n\nThis model is a fine-tuned version of openbmb/MiniCPM-2B-128k on the generator dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1### Training results### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
DuongTrongChi/minicpm-test
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T02:59:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 22, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Weyaxi/Bagel-Hermes-34B-Slerp <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-34B-Slerp-GGUF/resolve/main/Bagel-Hermes-34B-Slerp.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
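For readers who want a concrete starting point, here is a hedged sketch of pulling one of the single-file quants from the table and running it with llama-cpp-python; the file name is taken from the table above, everything else is an assumption:

```python
# Assumed usage sketch; any single-file quant listed above works the same way.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Bagel-Hermes-34B-Slerp-GGUF",
    filename="Bagel-Hermes-34B-Slerp.Q4_K_M.gguf",  # "fast, recommended" row in the table
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What does a slerp merge do? A:", max_tokens=128)
print(out["choices"][0]["text"])
```

If a quant is split into multiple parts, concatenate it first, as the linked READMEs describe.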
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Weyaxi/Bagel-Hermes-34B-Slerp", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE", "license_name": "yi-license", "quantized_by": "mradermacher"}
mradermacher/Bagel-Hermes-34B-Slerp-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Weyaxi/Bagel-Hermes-34B-Slerp", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:02:01+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-Weyaxi/Bagel-Hermes-34B-Slerp #license-other #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-Weyaxi/Bagel-Hermes-34B-Slerp #license-other #endpoints_compatible #region-us \n" ]
[ 47 ]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-Weyaxi/Bagel-Hermes-34B-Slerp #license-other #endpoints_compatible #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_withdpo_4iters_bs256_5101lr_iter_4 This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-08 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3", "model-index": [{"name": "0.0001_withdpo_4iters_bs256_5101lr_iter_4", "results": []}]}
ShenaoZ/0.0001_withdpo_4iters_bs256_5101lr_iter_4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:02:08+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0001_withdpo_4iters_bs256_5101lr_iter_4 This model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-08 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0001_withdpo_4iters_bs256_5101lr_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-08\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0001_withdpo_4iters_bs256_5101lr_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-08\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ 101, 74, 7, 9, 9, 4, 155, 5, 44 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# 0.0001_withdpo_4iters_bs256_5101lr_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_511lr_iter_3 on the updated and the original datasets.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-08\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-generation
transformers
I'm back and doing well! I've got a job in the field now, so we'll see in the long run how that affects my open source output.

Here we have an 11b Llama 3 instruct model for future work.

EDIT: Made a YAML mistake with Part Funnel, but it still works well.

---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/633a809fa4a8f33508dce32c/jJxgpSwdSal2XWsJ0KlG8.png)

This is a model stock merge of 3 models:
- Part Wave
- Part Block
- Part Funnel

With Part Funnel as the base.

---

Part Wave:
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [0, 12]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [8, 18]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [13, 23]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [18, 32]

---

Part Block:
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [0, 15]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [8, 23]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [16, 32]

---

Part Funnel:
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [0, 15]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [14, 14]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [13, 13]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [12, 12]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [11, 11]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [10, 10]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [9, 9]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [8, 23]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [22, 22]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [21, 21]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [20, 20]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [19, 19]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [18, 18]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [17, 17]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [16, 32]
{"license": "llama3"}
athirdpath/Llama-3-11b-Instruct
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:04:58+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
I'm back and doing well! I've got a job in the field now, so we'll see in the long run how that affects my open source output.

Here we have an 11b Llama 3 instruct model for future work.

EDIT: Made a YAML mistake with Part Funnel, but it still works well.

---

!image/png

This is a model stock merge of 3 models:
- Part Wave
- Part Block
- Part Funnel

With Part Funnel as the base.

---

Part Wave:
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [0, 12]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [8, 18]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [13, 23]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [18, 32]

---

Part Block:
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [0, 15]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [8, 23]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [16, 32]

---

Part Funnel:
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [0, 15]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [14, 14]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [13, 13]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [12, 12]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [11, 11]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [10, 10]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [9, 9]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [8, 23]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [22, 22]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [21, 21]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [20, 20]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [19, 19]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [18, 18]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [17, 17]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [16, 32]
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 43 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
We have fine-tuned BERT base on the Brown corpus to predict whether a text was written in a non-fiction, straightforward style or in a fictional, descriptive style. Here, fiction and non-fiction refer only to the writing style and not to the factual correctness of what is written in the text.

For more details, please see this blog and the referenced paper: https://bekushal.medium.com/fictometer-a-simple-and-explainable-algorithm-for-sentiment-analysis-31186d2a8c7e

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the model
model_name = "bekushal/FictoBERT"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example input text
input_text = "It is a sunny day with a nice wind blowing and I am feeling very happy."

# Preprocess the input
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, padding=True)

# Perform inference
outputs = model(**inputs)

# Get predicted class probabilities
predicted_probabilities = outputs.logits.softmax(dim=-1)

# Get predicted class label
predicted_label = predicted_probabilities.argmax().item()

# Convert predicted label to human-readable format
predicted_class = "fiction" if predicted_label == 1 else "non-fiction"

# Display results
print("Predicted class:", predicted_class)
print("Predicted class probabilities [non-fiction, fiction]:", predicted_probabilities)
```

---
license: apache-2.0
---
{}
bekushal/FictoBERT
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:05:30+00:00
[]
[]
TAGS #transformers #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
We have fine-tuned BERT base on the Brown corpus to predict whether a text was written in a non-fiction, straightforward style or in a fictional, descriptive style. Here, fiction and non-fiction refer only to the writing style and not to the factual correctness of what is written in the text.

For more details, please see this blog and the referenced paper: URL

---
license: apache-2.0
---
[]
[ "TAGS\n#transformers #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 27 ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
transformers
# Uploaded model - **Developed by:** jurieyel - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/tinyllama-bnb-4bit"}
jurieyel/text2sql-tinyllama-4bit
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:05:49+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: jurieyel - License: apache-2.0 - Finetuned from model : unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 60, 77 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
jurieyel/text2sql-tinyllama-4bit-tokenizer
null
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:05:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1406 - F1: 0.8628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.26 | 1.0 | 525 | 0.1569 | 0.8259 | | 0.1256 | 2.0 | 1050 | 0.1514 | 0.8406 | | 0.0786 | 3.0 | 1575 | 0.1406 | 0.8628 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0 - Datasets 2.19.0 - Tokenizers 0.19.1
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]}
gopidon/xlm-roberta-base-finetuned-panx-de
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:06:22+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1406 * F1: 0.8628 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.3.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 47, 101, 5, 40 ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hi000000/insta_llama2-koen
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:06:25+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mi-super-modelo This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6450 - Accuracy: 0.125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6111 | 0.5 | 5 | 1.6301 | 0.175 | | 1.6035 | 1.0 | 10 | 1.6450 | 0.125 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "mi-super-modelo", "results": []}]}
miguelpr/mi-super-modelo
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:06:40+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
mi-super-modelo =============== This model is a fine-tuned version of bert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6450 * Accuracy: 0.125 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 55, 101, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
Experimental: Tiefighter Expanded to 15.6 B / 48 layers. # D_AU-Tiefighter-Expanded-13B-pass D_AU-Tiefighter-Expanded-13B-pass is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) ## 🧩 Configuration ```yaml slices: - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [0, 40] - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [32, 40] merge_method: passthrough dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "DavidAU/D_AU-Tiefighter-Expanded-13B-pass" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter"], "base_model": ["KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Tiefighter"]}
DavidAU/D_AU-Tiefighter-Expanded-13B-pass
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter", "base_model:KoboldAI/LLaMA2-13B-Tiefighter", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:07:53+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #KoboldAI/LLaMA2-13B-Tiefighter #base_model-KoboldAI/LLaMA2-13B-Tiefighter #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Experimental: Tiefighter Expanded to 15.6 B / 48 layers. # D_AU-Tiefighter-Expanded-13B-pass D_AU-Tiefighter-Expanded-13B-pass is a merge of the following models using LazyMergekit: * KoboldAI/LLaMA2-13B-Tiefighter * KoboldAI/LLaMA2-13B-Tiefighter ## Configuration ## Usage
[ "# D_AU-Tiefighter-Expanded-13B-pass\n\nD_AU-Tiefighter-Expanded-13B-pass is a merge of the following models using LazyMergekit:\n* KoboldAI/LLaMA2-13B-Tiefighter\n* KoboldAI/LLaMA2-13B-Tiefighter", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #KoboldAI/LLaMA2-13B-Tiefighter #base_model-KoboldAI/LLaMA2-13B-Tiefighter #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# D_AU-Tiefighter-Expanded-13B-pass\n\nD_AU-Tiefighter-Expanded-13B-pass is a merge of the following models using LazyMergekit:\n* KoboldAI/LLaMA2-13B-Tiefighter\n* KoboldAI/LLaMA2-13B-Tiefighter", "## Configuration", "## Usage" ]
[ 76, 68, 3, 3 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #KoboldAI/LLaMA2-13B-Tiefighter #base_model-KoboldAI/LLaMA2-13B-Tiefighter #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# D_AU-Tiefighter-Expanded-13B-pass\n\nD_AU-Tiefighter-Expanded-13B-pass is a merge of the following models using LazyMergekit:\n* KoboldAI/LLaMA2-13B-Tiefighter\n* KoboldAI/LLaMA2-13B-Tiefighter## Configuration## Usage" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S1_InstructionGeneratorDelta This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 24 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 173 | 0.1487 | | No log | 2.0 | 346 | 0.1222 | | 0.3431 | 3.0 | 519 | 0.1085 | | 0.3431 | 4.0 | 692 | 0.0998 | | 0.3431 | 5.0 | 865 | 0.0949 | | 0.1133 | 6.0 | 1038 | 0.0911 | | 0.1133 | 7.0 | 1211 | 0.0889 | | 0.1133 | 8.0 | 1384 | 0.0873 | | 0.1007 | 9.0 | 1557 | 0.0855 | | 0.1007 | 10.0 | 1730 | 0.0842 | | 0.1007 | 11.0 | 1903 | 0.0831 | | 0.095 | 12.0 | 2076 | 0.0823 | | 0.095 | 13.0 | 2249 | 0.0817 | | 0.095 | 14.0 | 2422 | 0.0810 | | 0.0913 | 15.0 | 2595 | 0.0807 | | 0.0913 | 16.0 | 2768 | 0.0803 | | 0.0913 | 17.0 | 2941 | 0.0800 | | 0.0892 | 18.0 | 3114 | 0.0797 | | 0.0892 | 19.0 | 3287 | 0.0794 | | 0.0892 | 20.0 | 3460 | 0.0793 | | 0.0879 | 21.0 | 3633 | 0.0791 | | 0.0879 | 22.0 | 3806 | 0.0790 | | 0.0879 | 23.0 | 3979 | 0.0790 | | 0.0868 | 24.0 | 4152 | 0.0789 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilgpt2", "model-index": [{"name": "S1_InstructionGeneratorDelta", "results": []}]}
gjonesQ02/S1_InstructionGeneratorDelta
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:08:02+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
S1\_InstructionGeneratorDelta ============================= This model is a fine-tuned version of distilgpt2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0789 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 24 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 24", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 24", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ 63, 101, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 24### Training results### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-Q8_0-v0.2
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:09:35+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: theGhoul21 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 67, 88 ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisperFinetuneFinal This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5711 - Wer: 22.2920 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 2.8037 | 0.2778 | 10 | 1.8972 | 34.0973 | | 0.8101 | 0.5556 | 20 | 0.6863 | 28.6970 | | 0.5778 | 0.8333 | 30 | 0.5491 | 22.8885 | | 0.4375 | 1.1111 | 40 | 0.4962 | 20.7535 | | 0.3437 | 1.3889 | 50 | 0.4767 | 20.2512 | | 0.3277 | 1.6667 | 60 | 0.4921 | 21.0047 | | 0.3431 | 1.9444 | 70 | 0.4972 | 20.4082 | | 0.1331 | 2.2222 | 80 | 0.5317 | 25.7143 | | 0.1385 | 2.5 | 90 | 0.5308 | 20.4396 | | 0.146 | 2.7778 | 100 | 0.5711 | 22.2920 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.1.dev0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisperFinetuneFinal", "results": []}]}
shljessie/whisperFinetuneFinal
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny.en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:13:10+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us
whisperFinetuneFinal ==================== This model is a fine-tuned version of openai/URL on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.5711 * Wer: 22.2920 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.001 * train\_batch\_size: 128 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 100 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.1.dev0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
[ 54, 114, 5, 47 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 100### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3463 - Accuracy: 0.8463 - Recall: 0.8463 - F1: 0.8464 - Precision: 0.8482 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | No log | 0.9974 | 293 | 0.6222 | 0.7901 | 0.7901 | 0.7737 | 0.7747 | | No log | 1.9983 | 587 | 0.4901 | 0.8063 | 0.8063 | 0.7998 | 0.8066 | | No log | 2.9991 | 881 | 0.4374 | 0.8225 | 0.8225 | 0.8170 | 0.8356 | | No log | 4.0 | 1175 | 0.4262 | 0.8340 | 0.8340 | 0.8270 | 0.8541 | | No log | 4.9974 | 1468 | 0.4079 | 0.8310 | 0.8310 | 0.8290 | 0.8379 | | No log | 5.9983 | 1762 | 0.4117 | 0.8370 | 0.8370 | 0.8361 | 0.8509 | | No log | 6.9991 | 2056 | 0.3807 | 0.8370 | 0.8370 | 0.8361 | 0.8416 | | No log | 8.0 | 2350 | 0.3419 | 0.8595 | 0.8595 | 0.8583 | 0.8609 | | No log | 8.9974 | 2643 | 0.3628 | 0.8438 | 0.8438 | 0.8424 | 0.8448 | | 0.4492 | 9.9745 | 2930 | 0.3638 | 0.8399 | 0.8399 | 0.8394 | 0.8410 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0a0+81ea7a4 - Datasets 2.19.0 - Tokenizers 0.19.1
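Since the card leaves the usage sections empty, the following is a hedged inference sketch (not from the original card): it loads the fine-tuned checkpoint named in this record and classifies a single image; the image path is a placeholder.

```python
# Hypothetical single-image inference for the fine-tuned SwinV2 classifier above.
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Kushagra07/swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```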
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "microsoft/swinv2-base-patch4-window8-256", "model-index": [{"name": "swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8463175819497658, "name": "Accuracy"}, {"type": "recall", "value": 0.8463175819497658, "name": "Recall"}, {"type": "f1", "value": 0.8463640211224454, "name": "F1"}, {"type": "precision", "value": 0.8481964005333177, "name": "Precision"}]}]}]}
Kushagra07/swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask
null
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-base-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:14:33+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-base-patch4-window8-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
swinv2-base-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask ====================================================================== This model is a fine-tuned version of microsoft/swinv2-base-patch4-window8-256 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.3463 * Accuracy: 0.8463 * Recall: 0.8463 * F1: 0.8464 * Precision: 0.8482 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.0a0+81ea7a4 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-base-patch4-window8-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 79, 142, 5, 48 ]
[ "TAGS\n#transformers #tensorboard #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-base-patch4-window8-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
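The template above is unfilled; based only on this record's tags (a 4-bit Whisper checkpoint for automatic speech recognition), a hedged loading sketch might look like the following. It assumes the repository stores a bitsandbytes-quantized checkpoint that transformers can reload directly; if the actual serialization differs, adjust accordingly.

```python
# Hedged sketch: reload a (presumed) 4-bit quantized Whisper checkpoint.
# Requires bitsandbytes (and accelerate for device_map) if the saved quantization config uses them.
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

model_id = "JunWorks/Quantized_4bit_WhisperSmallOri_FP16"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, device_map="auto")
print(model.dtype, sum(p.numel() for p in model.parameters()))
```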
{"library_name": "transformers", "tags": []}
JunWorks/Quantized_4bit_WhisperSmallOri_FP16
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-05-02T03:18:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 38, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
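Because the repository ships GGUF files, one hedged way (not described in the card) to run it locally is through llama-cpp-python after downloading a .gguf file from the repo; the filename below is a placeholder for whichever quant is actually present.

```python
# Hedged local-inference sketch for the GGUF export of this model.
# "./model.gguf" is a placeholder path to the downloaded file.
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```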
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-gguf-16bit-v0.2
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:18:54+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: theGhoul21 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 67, 88 ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Llama-3-KoEn-8B-xtuner-llava-preview 🌋 <!-- Provide a quick summary of what the model is/does. --> Llama-3-KoEn-8B-xtuner-llava-preview 🌋 is Korean based MutliModal based on Llava architecture, merged with [ChatVector](https://arxiv.org/abs/2310.04799) methods leveraging 2 models: 1) [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) 2) [xtuner/llava-llama-3-8b-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-transformers) ## Model Details ### Model Description - **Developed by:** Junbum Lee (Beomi) - **Model type:** HuggingFace Llava 🌋 - **Language(s) (NLP):** Korean, English - **License:** cc-by-nc-sa-4.0 under Llama3 License - **Merged from model:** [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) & [xtuner/llava-llama-3-8b-transformers](https://huggingface.co/xtuner/llava-llama-3-8b-transformers) ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> ![Cat walking on frozen Han-River, Seoul](https://cdn-uploads.huggingface.co/production/uploads/5e56829137cb5b49818287ea/NWfoArWI4UPAxpEnolkwT.jpeg) > Two version recommended > > v1. `revision='a38aac3'`: Basic ChatVector, with [25B+ trained KoEn ckpt(rev. d4d25a2)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/d4d25a2). > > v1-1. `revision='0224971'`: Basic ChatVector, with [40B+ trained KoEn ckpt(rev. ad39b32)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/ad39b32cd4207f37f61f16e79d3f4020c5b744ef). > > v2. `revision='4f04d1e'`: Model diff based merging(ref. https://huggingface.co/blog/maywell/llm-feature-transfer), with [25B+ trained KoEn ckpt(rev. d4d25a2)](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview/commit/d4d25a2). ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaForConditionalGeneration model_id = "beomi/Llama-3-KoEn-8B-xtuner-llava-preview" model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype='auto', device_map='auto', revision='a38aac3', # 'a38aac3' for basic ChatVector, '4f04d1e' for Model diff based merging(ref. https://huggingface.co/blog/maywell/llm-feature-transfer) ) processor = AutoProcessor.from_pretrained(model_id) from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('./llava-llama-3-KoEn-8b-v1_1-transformers') terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\n이 이미지에 대해서 설명해주세요.<|eot_id|>" "<|start_header_id|>assistant<|end_header_id|>\n\n이 이미지에는") image_file = "https://cdn-uploads.huggingface.co/production/uploads/5e56829137cb5b49818287ea/NWfoArWI4UPAxpEnolkwT.jpeg" raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=400, do_sample=True, eos_token_id=terminators,) print(processor.decode(output[0][2:], skip_special_tokens=False)) # --- Example Output [v1, Chat Vector] --- user<|end_header_id|> <image> 이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|> 이 이미지에는 고양이 한 마리가 강물 위를 걸어가는 모습이 보여집니다. 고양이는 강물의 잔물결에 미끄럼을 타고 강 가로를 지나는 데 능숙하게 보입니다. 고양이의 발은 강물로 잘 들어가, 그것을 즐기며 걸어갑니다. 또한 이 이미지도 음성 녹음을 하거나 녹화된 자료로 제작되었으며, 주로 고양이의 모습을 강하게 보여줍니다. 소리 효과도 여러 가지로 추가하여 고양이의 스토리를 다양하게 전달합니다. 
강물은 잔물결을 나타내며 강물 위를 걷는 고양이의 모습을 더욱 강렬하게 강조하기 위해 잔물결을 통해 더 디테일한 장면을 보여줍니다.<|eot_id|> # --- Example Output [v1-1, Chat Vector] --- user<|end_header_id|> <image> 이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|> 이 이미지에서는 한 고양이가 서해안에 위치한 바다를 걷고 있는 모습을 볼 수 있습니다. 고양이는 해변에서부터 바다로 걸어들어가는 중이며, 주변에는 잔잔한 파도가 밀려오는 모습을 보여주고 있습니다. 이 고양이는 태어날 때부터 고양이와 강아지와는 다르게 바다를 경험하고, 적응해가고 있습니다. 고양이는 바다를 좋아하고, 이 환경에서 행복을 느끼는 것 같습니다. 이 고양이는 인간이 아닌 자연의 일부로써 이 환경에서 살아가고 있습니다.<|eot_id|> # --- Example Output [v2, Model diff based merging] --- user<|end_header_id|> <image> 이 이미지에 대해서 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|> 이 이미지에는 한국어 자막과 함께 고양이가 물에 발을 디디고 걷는 모습이 담겨 있습니다. 고양이는 오른쪽 발을 물에 담그고 걷는 중이며, 한국어 자막은 "고양이는 물을 좋아합니다"라는 문장을 포함하고 있습니다. 이 자막은 고양이가 물을 좋아하는 것을 강조하고 있습니다.<|eot_id|> ```
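As a rough illustration of the ChatVector-style merge described above (a sketch only: the author's exact procedure differs between the listed revisions, and parameter-name alignment between the LLaVA language model and the plain Llama-3 checkpoints is assumed here), the idea is to add the delta between a tuned model and its base onto the continually pre-trained Korean target:

```python
# Hedged sketch of chat-vector arithmetic over aligned state dicts:
# merged = target + scale * (tuned - base)
import torch

def apply_chat_vector(target_sd, tuned_sd, base_sd, scale: float = 1.0):
    merged = {}
    for name, weight in target_sd.items():
        if name in tuned_sd and name in base_sd:
            delta = tuned_sd[name].to(weight.dtype) - base_sd[name].to(weight.dtype)
            merged[name] = weight + scale * delta
        else:
            # parameters without counterparts (e.g. vision-tower weights) are left untouched here
            merged[name] = weight.clone()
    return merged
```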
{"language": ["ko", "en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers"}
beomi/Llama-3-KoEn-8B-xtuner-llava-preview
null
[ "transformers", "safetensors", "llava", "pretraining", "ko", "en", "arxiv:2310.04799", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:21:54+00:00
[ "2310.04799" ]
[ "ko", "en" ]
TAGS #transformers #safetensors #llava #pretraining #ko #en #arxiv-2310.04799 #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
# Llama-3-KoEn-8B-xtuner-llava-preview Llama-3-KoEn-8B-xtuner-llava-preview is a Korean-based multimodal model based on the Llava architecture, merged with ChatVector methods leveraging 2 models: 1) beomi/Llama-3-KoEn-8B-preview 2) xtuner/llava-llama-3-8b-transformers ## Model Details ### Model Description - Developed by: Junbum Lee (Beomi) - Model type: HuggingFace Llava - Language(s) (NLP): Korean, English - License: cc-by-nc-sa-4.0 under Llama3 License - Merged from model: beomi/Llama-3-KoEn-8B-preview & xtuner/llava-llama-3-8b-transformers ### Direct Use !Cat walking on frozen Han-River, Seoul > Two versions recommended > > v1. 'revision='a38aac3'': Basic ChatVector, with 25B+ trained KoEn ckpt(rev. d4d25a2). > > v1-1. 'revision='0224971'': Basic ChatVector, with 40B+ trained KoEn ckpt(rev. ad39b32). > > v2. 'revision='4f04d1e'': Model diff based merging(ref. URL with 25B+ trained KoEn ckpt(rev. d4d25a2).
[ "# Llama-3-KoEn-8B-xtuner-llava-preview \n\n\n\nLlama-3-KoEn-8B-xtuner-llava-preview is Korean based MutliModal based on Llava architecture, merged with ChatVector methods leveraging 2 models: \n1) beomi/Llama-3-KoEn-8B-preview\n2) xtuner/llava-llama-3-8b-transformers", "## Model Details", "### Model Description\n\n- Developed by: Junbum Lee (Beomi)\n- Model type: HuggingFace Llava \n- Language(s) (NLP): Korean, English\n- License: cc-by-nc-sa-4.0 under Llama3 License\n- Merged from model: beomi/Llama-3-KoEn-8B-preview & xtuner/llava-llama-3-8b-transformers", "### Direct Use\n\n\n\n!Cat walking on frozen Han-River, Seoul\n\n> Two version recommended\n>\n> v1. 'revision='a38aac3'': Basic ChatVector, with 25B+ trained KoEn ckpt(rev. d4d25a2).\n>\n> v1-1. 'revision='0224971'': Basic ChatVector, with 40B+ trained KoEn ckpt(rev. ad39b32).\n> \n> v2. 'revision='4f04d1e'': Model diff based merging(ref. URL with 25B+ trained KoEn ckpt(rev. d4d25a2)." ]
[ "TAGS\n#transformers #safetensors #llava #pretraining #ko #en #arxiv-2310.04799 #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n", "# Llama-3-KoEn-8B-xtuner-llava-preview \n\n\n\nLlama-3-KoEn-8B-xtuner-llava-preview is Korean based MutliModal based on Llava architecture, merged with ChatVector methods leveraging 2 models: \n1) beomi/Llama-3-KoEn-8B-preview\n2) xtuner/llava-llama-3-8b-transformers", "## Model Details", "### Model Description\n\n- Developed by: Junbum Lee (Beomi)\n- Model type: HuggingFace Llava \n- Language(s) (NLP): Korean, English\n- License: cc-by-nc-sa-4.0 under Llama3 License\n- Merged from model: beomi/Llama-3-KoEn-8B-preview & xtuner/llava-llama-3-8b-transformers", "### Direct Use\n\n\n\n!Cat walking on frozen Han-River, Seoul\n\n> Two version recommended\n>\n> v1. 'revision='a38aac3'': Basic ChatVector, with 25B+ trained KoEn ckpt(rev. d4d25a2).\n>\n> v1-1. 'revision='0224971'': Basic ChatVector, with 40B+ trained KoEn ckpt(rev. ad39b32).\n> \n> v2. 'revision='4f04d1e'': Model diff based merging(ref. URL with 25B+ trained KoEn ckpt(rev. d4d25a2)." ]
[ 53, 98, 4, 94, 152 ]
[ "TAGS\n#transformers #safetensors #llava #pretraining #ko #en #arxiv-2310.04799 #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n# Llama-3-KoEn-8B-xtuner-llava-preview \n\n\n\nLlama-3-KoEn-8B-xtuner-llava-preview is Korean based MutliModal based on Llava architecture, merged with ChatVector methods leveraging 2 models: \n1) beomi/Llama-3-KoEn-8B-preview\n2) xtuner/llava-llama-3-8b-transformers## Model Details### Model Description\n\n- Developed by: Junbum Lee (Beomi)\n- Model type: HuggingFace Llava \n- Language(s) (NLP): Korean, English\n- License: cc-by-nc-sa-4.0 under Llama3 License\n- Merged from model: beomi/Llama-3-KoEn-8B-preview & xtuner/llava-llama-3-8b-transformers### Direct Use\n\n\n\n!Cat walking on frozen Han-River, Seoul\n\n> Two version recommended\n>\n> v1. 'revision='a38aac3'': Basic ChatVector, with 25B+ trained KoEn ckpt(rev. d4d25a2).\n>\n> v1-1. 'revision='0224971'': Basic ChatVector, with 40B+ trained KoEn ckpt(rev. ad39b32).\n> \n> v2. 'revision='4f04d1e'': Model diff based merging(ref. URL with 25B+ trained KoEn ckpt(rev. d4d25a2)." ]
text-generation
transformers
# Llama-3-OpenBioMed-8B-slerp-v0.2 Llama-3-OpenBioMed-8B-slerp-v0.2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) * [johnsnowlabs/JSL-MedLlama-3-8B-v2.0](https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v2.0) ## 🧩 Configuration ```yaml slices: - sources: - model: aaditya/Llama3-OpenBioLLM-8B layer_range: [0, 32] - model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0 layer_range: [0, 32] merge_method: slerp base_model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "abhinand/Llama-3-OpenBioMed-8B-slerp-v0.2" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
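For intuition about the `t` values in the configuration above, a slerp merge interpolates each pair of weight tensors along the arc between them rather than linearly; a minimal sketch (not mergekit's actual implementation) is:

```python
# Illustrative slerp between two weight tensors for interpolation factor t.
import torch

def slerp(w1: torch.Tensor, w2: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    v1, v2 = w1.flatten().float(), w2.flatten().float()
    cos_theta = torch.clamp(torch.dot(v1, v2) / (v1.norm() * v2.norm() + eps), -1.0, 1.0)
    theta = torch.arccos(cos_theta)
    if float(theta) < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * w1 + t * w2
    sin_theta = torch.sin(theta)
    return (torch.sin((1 - t) * theta) / sin_theta) * w1 + (torch.sin(t * theta) / sin_theta) * w2
```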
{"tags": ["merge", "mergekit", "lazymergekit", "aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v2.0"], "base_model": ["aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v2.0"]}
abhinand/Llama-3-OpenBioMed-8B-slerp-v0.2
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v2.0", "base_model:aaditya/Llama3-OpenBioLLM-8B", "base_model:johnsnowlabs/JSL-MedLlama-3-8B-v2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:22:03+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #aaditya/Llama3-OpenBioLLM-8B #johnsnowlabs/JSL-MedLlama-3-8B-v2.0 #base_model-aaditya/Llama3-OpenBioLLM-8B #base_model-johnsnowlabs/JSL-MedLlama-3-8B-v2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Llama-3-OpenBioMed-8B-slerp-v0.2 Llama-3-OpenBioMed-8B-slerp-v0.2 is a merge of the following models using LazyMergekit: * aaditya/Llama3-OpenBioLLM-8B * johnsnowlabs/JSL-MedLlama-3-8B-v2.0 ## Configuration ## Usage
[ "# Llama-3-OpenBioMed-8B-slerp-v0.2\n\nLlama-3-OpenBioMed-8B-slerp-v0.2 is a merge of the following models using LazyMergekit:\n* aaditya/Llama3-OpenBioLLM-8B\n* johnsnowlabs/JSL-MedLlama-3-8B-v2.0", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #aaditya/Llama3-OpenBioLLM-8B #johnsnowlabs/JSL-MedLlama-3-8B-v2.0 #base_model-aaditya/Llama3-OpenBioLLM-8B #base_model-johnsnowlabs/JSL-MedLlama-3-8B-v2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Llama-3-OpenBioMed-8B-slerp-v0.2\n\nLlama-3-OpenBioMed-8B-slerp-v0.2 is a merge of the following models using LazyMergekit:\n* aaditya/Llama3-OpenBioLLM-8B\n* johnsnowlabs/JSL-MedLlama-3-8B-v2.0", "## Configuration", "## Usage" ]
[ 128, 92, 3, 3 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #aaditya/Llama3-OpenBioLLM-8B #johnsnowlabs/JSL-MedLlama-3-8B-v2.0 #base_model-aaditya/Llama3-OpenBioLLM-8B #base_model-johnsnowlabs/JSL-MedLlama-3-8B-v2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Llama-3-OpenBioMed-8B-slerp-v0.2\n\nLlama-3-OpenBioMed-8B-slerp-v0.2 is a merge of the following models using LazyMergekit:\n* aaditya/Llama3-OpenBioLLM-8B\n* johnsnowlabs/JSL-MedLlama-3-8B-v2.0## Configuration## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
TinyPixel/llama-3-adapter3
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:23:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
null
## Llamacpp imatrix Quantizations of Hermes-2-Pro-Llama-3-8B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization. Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Hermes-2-Pro-Llama-3-8B-Q8_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Hermes-2-Pro-Llama-3-8B-Q6_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Hermes-2-Pro-Llama-3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Hermes-2-Pro-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [Hermes-2-Pro-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Hermes-2-Pro-Llama-3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [Hermes-2-Pro-Llama-3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Hermes-2-Pro-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Hermes-2-Pro-Llama-3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Hermes-2-Pro-Llama-3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Hermes-2-Pro-Llama-3-8B-Q2_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Hermes-2-Pro-Llama-3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Hermes-2-Pro-Llama-3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Hermes-2-Pro-Llama-3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [Hermes-2-Pro-Llama-3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [Hermes-2-Pro-Llama-3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [Hermes-2-Pro-Llama-3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Llama-3-8B-GGUF/blob/main/Hermes-2-Pro-Llama-3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. 
If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
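For readers who prefer to script the single-file download described above, here is a hedged sketch that uses the `huggingface_hub` package to fetch one quant and the third-party `llama-cpp-python` bindings to run it with the ChatML prompt format shown earlier. Neither package is prescribed by this card, so treat it as an illustration rather than the official workflow.

```python
# Illustrative sketch (not from the original card): download one quant file and
# run it locally. Assumes `huggingface_hub` and `llama-cpp-python` are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch only the Q4_K_M file instead of cloning the whole branch.
model_path = hf_hub_download(
    repo_id="bartowski/Hermes-2-Pro-Llama-3-8B-GGUF",
    filename="Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers to the GPU; use 0 for CPU-only inference.
llm = Llama(model_path=model_path, n_ctx=8192, n_gpu_layers=-1)

# The repository documents the ChatML prompt format shown above.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```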
{"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "quantized_by": "bartowski", "pipeline_tag": "text-generation", "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]}
bartowski/Hermes-2-Pro-Llama-3-8B-GGUF
null
[ "gguf", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "text-generation", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Meta-Llama-3-8B", "license:apache-2.0", "region:us" ]
null
2024-05-02T03:23:42+00:00
[]
[ "en" ]
TAGS #gguf #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #text-generation #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #region-us
Llamacpp imatrix Quantizations of Hermes-2-Pro-Llama-3-8B --------------------------------------------------------- Using <a href="URL release <a href="URL for quantization. Original model: URL All quants made using imatrix option with dataset provided by Kalomaze here Prompt format ------------- Download a file (not the whole branch) from below: -------------------------------------------------- Which file should I choose? --------------------------- A great write up with charts showing various performances is provided by Artefact2 here The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: URL feature matrix But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#gguf #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #text-generation #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #region-us \n" ]
[ 103 ]
[ "TAGS\n#gguf #Llama-3 #instruct #finetune #chatml #DPO #RLHF #gpt4 #synthetic data #distillation #function calling #json mode #axolotl #text-generation #en #dataset-teknium/OpenHermes-2.5 #base_model-NousResearch/Meta-Llama-3-8B #license-apache-2.0 #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ryanyeo/kirnect-Llama-3-Open-Ko-8B-Instruct-preview-0501-5150step-8batch_5epoch
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:24:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.4768 | | 2.7735 | 2.0 | 500 | 1.9128 | | 2.7735 | 3.0 | 750 | 1.8364 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
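The card does not include a usage snippet, so the following is a hedged sketch of how a fine-tuned DistilBERT extractive QA checkpoint such as this one is typically queried with the 🤗 Transformers pipeline API; the question and context strings are invented purely for illustration.

```python
# Illustrative sketch (not from the original card): extractive question answering
# with the fine-tuned checkpoint via the Transformers pipeline API.
from transformers import pipeline

qa = pipeline("question-answering", model="novalentino94/my_awesome_qa_model")

# The question and context below are made-up placeholders.
result = qa(
    question="What does the model return?",
    context="Given a question and a context, the model returns the span of the "
            "context that best answers the question, along with a confidence score.",
)
print(result["answer"], result["score"])
```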
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_awesome_qa_model", "results": []}]}
novalentino94/my_awesome_qa_model
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:33:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
my\_awesome\_qa\_model ====================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.8364 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 54, 101, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - yuffish/plush-1.5 This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks object using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
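Since the "How to use" snippet above is still a TODO, here is a hedged sketch of how a DreamBooth checkpoint in this format is usually loaded with 🤗 Diffusers. It assumes the repository exposes a standard `StableDiffusionPipeline` and reuses the instance prompt "a photo of sks object" from the card metadata; it is an illustration, not the author's official example.

```python
# Illustrative sketch (not from the original card): text-to-image inference with
# the DreamBooth weights via diffusers. Assumes a standard StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yuffish/plush-1.5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # drop this line (and float16) to run on CPU

# Reuse the instance prompt the model was trained on.
image = pipe("a photo of sks object", num_inference_steps=50).images[0]
image.save("sks_object.png")
```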
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "a photo of sks object"}
yuffish/plush-1.5
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-05-02T03:36:25+00:00
[]
[]
TAGS #diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# DreamBooth - yuffish/plush-1.5 This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks object using DreamBooth. You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# DreamBooth - yuffish/plush-1.5\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# DreamBooth - yuffish/plush-1.5\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ 82, 75, 6, 7, 23, 17 ]
[ "TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n# DreamBooth - yuffish/plush-1.5\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]" ]
text-generation
transformers
> **CAUTION:** This model scores high on the Leaderboard but outputs gibberish! # Llama-3-OpenBioMed-8B-slerp-v0.3 Llama-3-OpenBioMed-8B-slerp-v0.3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Jayant9928/orpo_med_v3](https://huggingface.co/Jayant9928/orpo_med_v3) * [skumar9/Llama-medx_v3](https://huggingface.co/skumar9/Llama-medx_v3) ## 🧩 Configuration ```yaml slices: - sources: - model: Jayant9928/orpo_med_v3 layer_range: [0, 32] - model: skumar9/Llama-medx_v3 layer_range: [0, 32] merge_method: slerp base_model: Jayant9928/orpo_med_v3 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "abhinand/Llama-3-OpenBioMed-8B-slerp-v0.3" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "llama3", "tags": ["merge", "mergekit", "lazymergekit", "Jayant9928/orpo_med_v3", "skumar9/Llama-medx_v3"], "base_model": ["Jayant9928/orpo_med_v3", "skumar9/Llama-medx_v3"]}
abhinand/Llama-3-OpenBioMed-8B-slerp-v0.3
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Jayant9928/orpo_med_v3", "skumar9/Llama-medx_v3", "conversational", "base_model:Jayant9928/orpo_med_v3", "base_model:skumar9/Llama-medx_v3", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:36:58+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Jayant9928/orpo_med_v3 #skumar9/Llama-medx_v3 #conversational #base_model-Jayant9928/orpo_med_v3 #base_model-skumar9/Llama-medx_v3 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
> CAUTION: This model scores high on the Leaderboard but outputs gibberish! # Llama-3-OpenBioMed-8B-slerp-v0.3 Llama-3-OpenBioMed-8B-slerp-v0.3 is a merge of the following models using LazyMergekit: * Jayant9928/orpo_med_v3 * skumar9/Llama-medx_v3 ## Configuration ## Usage
[ "# Llama-3-OpenBioMed-8B-slerp-v0.3\n\nLlama-3-OpenBioMed-8B-slerp-v0.3 is a merge of the following models using LazyMergekit:\n* Jayant9928/orpo_med_v3\n* skumar9/Llama-medx_v3", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Jayant9928/orpo_med_v3 #skumar9/Llama-medx_v3 #conversational #base_model-Jayant9928/orpo_med_v3 #base_model-skumar9/Llama-medx_v3 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Llama-3-OpenBioMed-8B-slerp-v0.3\n\nLlama-3-OpenBioMed-8B-slerp-v0.3 is a merge of the following models using LazyMergekit:\n* Jayant9928/orpo_med_v3\n* skumar9/Llama-medx_v3", "## Configuration", "## Usage" ]
[ 117, 82, 3, 3 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Jayant9928/orpo_med_v3 #skumar9/Llama-medx_v3 #conversational #base_model-Jayant9928/orpo_med_v3 #base_model-skumar9/Llama-medx_v3 #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Llama-3-OpenBioMed-8B-slerp-v0.3\n\nLlama-3-OpenBioMed-8B-slerp-v0.3 is a merge of the following models using LazyMergekit:\n* Jayant9928/orpo_med_v3\n* skumar9/Llama-medx_v3## Configuration## Usage" ]
null
peft
**Note**: This model card has been generated automatically according to the information the Trainer had access to. Visit the [model card](https://ritvik19.github.io/zephyr-mini/) to see the full description. # zephyr-danube-sft-qlora This model is a fine-tuned version of [h2oai/h2o-danube-1.8b-base](https://huggingface.co/h2oai/h2o-danube-1.8b-base) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 1.0893 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 128 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0883 | 0.9998 | 1140 | 1.0893 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.1 - Pytorch 2.1.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
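The card omits a loading example, so below is a hedged sketch of how this QLoRA adapter can be applied on top of the base model with PEFT. The tokenizer is taken from the base checkpoint named above, and device placement is left out for brevity; treat it as an illustration under those assumptions, not part of the original recipe.

```python
# Illustrative sketch (not from the original card): load the QLoRA adapter with
# PEFT on top of its base model and generate a short reply.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "Ritvik19/zephyr-danube-sft-qlora"

# AutoPeftModelForCausalLM reads the adapter config and pulls in the base model
# (h2oai/h2o-danube-1.8b-base) automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16)

# Assumption: the base model's tokenizer is the right one for this adapter.
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube-1.8b-base")

inputs = tokenizer("What is supervised fine-tuning?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```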
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "h2oai/h2o-danube-1.8b-base", "model-index": [{"name": "zephyr-danube-sft-qlora", "results": []}]}
Ritvik19/zephyr-danube-sft-qlora
null
[ "peft", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:h2oai/h2o-danube-1.8b-base", "license:apache-2.0", "region:us" ]
null
2024-05-02T03:37:12+00:00
[]
[]
TAGS #peft #safetensors #mistral #alignment-handbook #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-h2oai/h2o-danube-1.8b-base #license-apache-2.0 #region-us
Note: This model card has been generated automatically according to the information the Trainer had access to. Visit the model card to see the full description. zephyr-danube-sft-qlora ======================= This model is a fine-tuned version of h2oai/h2o-danube-1.8b-base on the HuggingFaceH4/ultrachat\_200k dataset. It achieves the following results on the evaluation set: * Loss: 1.0893 ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * distributed\_type: multi-GPU * gradient\_accumulation\_steps: 128 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.40.1 * Pytorch 2.1.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #mistral #alignment-handbook #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-h2oai/h2o-danube-1.8b-base #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 70, 154, 5, 52 ]
[ "TAGS\n#peft #safetensors #mistral #alignment-handbook #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-h2oai/h2o-danube-1.8b-base #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
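The card does not say which GGUF file the repository ships, so the hedged sketch below discovers it at runtime with `huggingface_hub` instead of hard-coding a filename; the downloaded file can then be loaded by any GGUF-compatible runtime such as llama.cpp. This is an illustration, not part of the original card.

```python
# Illustrative sketch (not from the original card): locate and download the GGUF
# file from this repository without guessing its exact name.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "theGhoul21/srl-sft-010524-gguf-q4_k_m-v0.2"

# List repo contents and keep only GGUF files.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print("Available GGUF files:", gguf_files)

# Download the first match; point your GGUF runtime (e.g. llama.cpp) at this path.
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Downloaded to:", local_path)
```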
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-gguf-q4_k_m-v0.2
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T03:37:43+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: theGhoul21 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 67, 88 ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ayon128/CM_BN_EN_0
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-02T03:38:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 46, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]