Dataset column schema:

| Column          | dtype           | Stats       |
|:----------------|:----------------|:------------|
| pipeline_tag    | stringclasses   | 48 values   |
| library_name    | stringclasses   | 198 values  |
| text            | stringlengths   | 1 to 900k   |
| metadata        | stringlengths   | 2 to 438k   |
| id              | stringlengths   | 5 to 122    |
| last_modified   | null            | always null |
| tags            | sequencelengths | 1 to 1.84k  |
| sha             | null            | always null |
| created_at      | stringlengths   | 25 to 25    |
| arxiv           | sequencelengths | 0 to 201    |
| languages       | sequencelengths | 0 to 1.83k  |
| tags_str        | stringlengths   | 17 to 9.34k |
| text_str        | stringlengths   | 0 to 389k   |
| text_lists      | sequencelengths | 0 to 722    |
| processed_texts | sequencelengths | 1 to 723    |
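A dump with this schema can be inspected with the 🤗 `datasets` library. A minimal sketch; the repository ID `your-username/model-cards-dump` is a hypothetical placeholder and should be replaced with the actual Hub ID of this dataset:

```python
from datasets import load_dataset

# Hypothetical repository ID; substitute the real Hub ID of this dump.
ds = load_dataset("your-username/model-cards-dump", split="train")

# The columns match the schema table above.
print(ds.features)             # pipeline_tag, library_name, text, metadata, id, ...
print(ds[0]["id"])             # e.g. "melabelen/mela-tw-sentiment-model"
print(ds[0]["pipeline_tag"])   # e.g. "text-classification"
```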
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3773 - Accuracy: 0.6719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6515 | 1.0 | 5702 | 0.6429 | 0.73 | | 0.4767 | 2.0 | 11404 | 0.6908 | 0.7275 | | 0.2759 | 3.0 | 17106 | 1.1133 | 0.7265 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "test_trainer", "results": []}]}
melabelen/mela-tw-sentiment-model
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-25T15:36:37+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
test\_trainer ============= This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.3773 * Accuracy: 0.6719 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
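The card in this row documents a DistilBERT text-classification fine-tune (eval accuracy 0.6719) but leaves its usage sections empty. A minimal sketch of loading it with the `transformers` pipeline API; the example input is an illustrative assumption, since the card does not document the training data, language, or label set:

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier published in this row.
classifier = pipeline("text-classification", model="melabelen/mela-tw-sentiment-model")

# Illustrative input only; the card does not specify the label set.
print(classifier("This movie was surprisingly good."))
# -> [{'label': ..., 'score': ...}]
```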
null
null
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->all ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{}
murawhale/ATEEZ_Jung_wooyoung
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-04-25T15:37:04+00:00
[ "1910.09700" ]
[]
TAGS #arxiv-1910.09700 #region-us
# Model Card for Model ID This modelcard aims to be a base template for new models. It has been generated using this raw template. ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses all ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses\n\nall", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#arxiv-1910.09700 #region-us \n", "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses\n\nall", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
TinyPixel/try-2
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T15:37:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
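This row's card is an empty template, but its tags (`transformers`, `safetensors`, `llama`, `text-generation`) indicate a causal language model. A minimal, untested sketch of loading it for generation; the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyPixel/try-2"

# Load the tokenizer and causal-LM weights as suggested by the repo tags.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```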
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # saiga_double_lora This model is a fine-tuned version of [TheBloke/Llama-2-7B-fp16](https://huggingface.co/TheBloke/Llama-2-7B-fp16) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 10 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1113 | 1.43 | 10 | 2.0512 | | 2.1068 | 2.86 | 20 | 2.0512 | ### Framework versions - PEFT 0.10.0 - Transformers 4.36.2 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Llama-2-7B-fp16", "model-index": [{"name": "saiga_double_lora", "results": []}]}
marcus2000/saiga_double_lora
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:TheBloke/Llama-2-7B-fp16", "region:us" ]
null
2024-04-25T15:37:40+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-TheBloke/Llama-2-7B-fp16 #region-us
saiga\_double\_lora =================== This model is a fine-tuned version of TheBloke/Llama-2-7B-fp16 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.0512 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-06 * train\_batch\_size: 2 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 10 * total\_train\_batch\_size: 20 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 20 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.36.2 * Pytorch 2.2.2+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 10\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 20", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.36.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Llama-2-7B-fp16 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 10\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 20", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.36.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
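The card in this row describes a LoRA adapter (saiga_double_lora) trained with PEFT on top of TheBloke/Llama-2-7B-fp16, but gives no loading snippet. A minimal sketch using the `peft` library, assuming enough memory for the 7B base model; the prompt is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Llama-2-7B-fp16"
adapter_id = "marcus2000/saiga_double_lora"

# Load the base Llama-2-7B weights, then attach the LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Tell me a short story.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```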
sentence-similarity
sentence-transformers
# SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 1000000 tokens - **Output Dimensionality:** 300 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(400001, 300) ) (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 300, 'out_features': 300, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Dense({'in_features': 300, 'out_features': 300, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomaarsen/glove-mean-pooling-sts") # Run inference sentences = [ 'A baby is laughing.', 'The baby laughed in his car seat.', 'A person is combing a cat hair.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 300] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7684 | | **spearman_cosine** | **0.7633** | | pearson_manhattan | 0.7167 | | spearman_manhattan | 0.7284 | | pearson_euclidean | 0.7177 | | spearman_euclidean | 0.7297 | | pearson_dot | 0.5616 | | spearman_dot | 0.6116 | | pearson_max | 0.7684 | | spearman_max | 0.7633 | #### Semantic Similarity * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6783 | | **spearman_cosine** | **0.6549** | | pearson_manhattan | 0.6065 | | spearman_manhattan | 0.6169 | | pearson_euclidean | 0.6073 | | spearman_euclidean | 0.6179 | | pearson_dot | 0.4501 | | spearman_dot | 0.4723 | | pearson_max | 0.6783 | | spearman_max | 0.6549 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### sentence-transformers/stsb * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 1 tokens</li><li>mean: 3.38 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 3.39 tokens</li><li>max: 10 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: 
[sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 1 tokens</li><li>mean: 5.17 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 5.08 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------|:------------------------------------------------------|:------------------| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: False - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 
'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: None - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-test_spearman_cosine | |:------:|:----:|:-------------:|:------:|:-----------------------:|:------------------------:| | 0.5556 | 100 | 0.0908 | 0.0577 | 0.7633 | - | | 1.0 | 180 | - | - | - | 0.6549 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.000 kWh - **Carbon Emitted**: 0.000 kg of CO2 - **Hours Used**: 0.002 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 3.0.0.dev0 - Transformers: 4.41.0.dev0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.26.1 - Datasets: 2.18.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"language": ["en"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss"], "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "widget": [{"source_sentence": "Women are running.", "sentences": ["Women are running.", "The cougar is chasing the bear.", "NATO soldier killed in Afghan attack"]}, {"source_sentence": "A woman is reading.", "sentences": ["A woman is writing something.", "A person is drawing a picture.", "A dog laying in the snow."]}, {"source_sentence": "A plane in the sky.", "sentences": ["Two airplanes in the sky.", "A man is playing an instrument.", "Bangladesh executes opposition leader"]}, {"source_sentence": "A man jumping rope", "sentences": ["A man is climbing a rope.", "The girl is playing the guitar.", "A chef prepared a meal."]}, {"source_sentence": "A baby is laughing.", "sentences": ["The baby laughed in his car seat.", "A person is combing a cat hair.", "A man is riding a horse in the desert."]}], "pipeline_tag": "sentence-similarity", "co2_eq_emissions": {"emissions": 0.04787408159843385, "energy_consumed": 0.00012316397033828962, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.002, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.7683803418925228, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.7632727671822109, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.7167343000545916, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.7284225373129679, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.7177127625426643, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.729676171689153, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.561565806742925, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.6116263753232491, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.7683803418925228, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.7632727671822109, "name": "Spearman Max"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts test", "type": "sts-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.6783055201030597, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.6549170846046467, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.6064971288495867, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.6169187673598634, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.6073075425801093, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.6178537671183167, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.45009881124802237, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.47227603379856636, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.6783055201030597, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.6549170846046467, "name": 
"Spearman Max"}]}]}]}
tomaarsen/glove-mean-pooling-sts
null
[ "sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss", "en", "arxiv:1908.10084", "model-index", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:39:01+00:00
[ "1908.10084" ]
[ "en" ]
TAGS #sentence-transformers #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #model-index #co2_eq_emissions #endpoints_compatible #region-us
SentenceTransformer =================== This is a sentence-transformers model trained on the sentence-transformers/stsb dataset. It maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Model Details ------------- ### Model Description * Model Type: Sentence Transformer * Maximum Sequence Length: 1000000 tokens * Output Dimensionality: 300 tokens * Similarity Function: Cosine Similarity * Training Dataset: + sentence-transformers/stsb * Language: en ### Model Sources * Documentation: Sentence Transformers Documentation * Repository: Sentence Transformers on GitHub * Hugging Face: Sentence Transformers on Hugging Face ### Full Model Architecture Usage ----- ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: Then you can load this model and run inference. Evaluation ---------- ### Metrics #### Semantic Similarity * Dataset: 'sts-dev' * Evaluated with `EmbeddingSimilarityEvaluator` #### Semantic Similarity * Dataset: 'sts-test' * Evaluated with `EmbeddingSimilarityEvaluator` Training Details ---------------- ### Training Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 5,749 training samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 1,500 evaluation samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Training Hyperparameters #### Non-Default Hyperparameters * 'eval\_strategy': steps * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'num\_train\_epochs': 1 * 'warmup\_ratio': 0.1 * 'fp16': True #### All Hyperparameters Click to expand * 'overwrite\_output\_dir': False * 'do\_predict': False * 'eval\_strategy': steps * 'prediction\_loss\_only': False * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'per\_gpu\_train\_batch\_size': None * 'per\_gpu\_eval\_batch\_size': None * 'gradient\_accumulation\_steps': 1 * 'eval\_accumulation\_steps': None * 'learning\_rate': 5e-05 * 'weight\_decay': 0.0 * 'adam\_beta1': 0.9 * 'adam\_beta2': 0.999 * 'adam\_epsilon': 1e-08 * 'max\_grad\_norm': 1.0 * 'num\_train\_epochs': 1 * 'max\_steps': -1 * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_kwargs': {} * 'warmup\_ratio': 0.1 * 'warmup\_steps': 0 * 'log\_level': passive * 'log\_level\_replica': warning * 'log\_on\_each\_node': True * 'logging\_nan\_inf\_filter': True * 'save\_safetensors': True * 'save\_on\_each\_node': False * 'save\_only\_model': False * 'no\_cuda': False * 'use\_cpu': False * 'use\_mps\_device': False * 'seed': 42 * 'data\_seed': None * 'jit\_mode\_eval': False * 'use\_ipex': False * 'bf16': False * 'fp16': True * 'fp16\_opt\_level': O1 * 'half\_precision\_backend': auto * 'bf16\_full\_eval': False * 'fp16\_full\_eval': False * 'tf32': None * 'local\_rank': 0 * 'ddp\_backend': None * 'tpu\_num\_cores': None * 'tpu\_metrics\_debug': False * 'debug': [] * 'dataloader\_drop\_last': False * 'dataloader\_num\_workers': 0 * 'dataloader\_prefetch\_factor': None * 'past\_index': -1 * 'disable\_tqdm': False * 'remove\_unused\_columns': True * 
'label\_names': None * 'load\_best\_model\_at\_end': False * 'ignore\_data\_skip': False * 'fsdp': [] * 'fsdp\_min\_num\_params': 0 * 'fsdp\_config': {'min\_num\_params': 0, 'xla': False, 'xla\_fsdp\_v2': False, 'xla\_fsdp\_grad\_ckpt': False} * 'fsdp\_transformer\_layer\_cls\_to\_wrap': None * 'accelerator\_config': {'split\_batches': False, 'dispatch\_batches': None, 'even\_batches': True, 'use\_seedable\_sampler': True, 'non\_blocking': False, 'gradient\_accumulation\_kwargs': None} * 'deepspeed': None * 'label\_smoothing\_factor': 0.0 * 'optim': adamw\_torch * 'optim\_args': None * 'adafactor': False * 'group\_by\_length': False * 'length\_column\_name': length * 'ddp\_find\_unused\_parameters': None * 'ddp\_bucket\_cap\_mb': None * 'ddp\_broadcast\_buffers': None * 'dataloader\_pin\_memory': True * 'dataloader\_persistent\_workers': False * 'skip\_memory\_metrics': True * 'use\_legacy\_prediction\_loop': False * 'push\_to\_hub': False * 'resume\_from\_checkpoint': None * 'hub\_model\_id': None * 'hub\_strategy': every\_save * 'hub\_private\_repo': False * 'hub\_always\_push': False * 'gradient\_checkpointing': False * 'gradient\_checkpointing\_kwargs': None * 'include\_inputs\_for\_metrics': False * 'eval\_do\_concat\_batches': True * 'fp16\_backend': auto * 'push\_to\_hub\_model\_id': None * 'push\_to\_hub\_organization': None * 'mp\_parameters': * 'auto\_find\_batch\_size': False * 'full\_determinism': False * 'torchdynamo': None * 'ray\_scope': last * 'ddp\_timeout': 1800 * 'torch\_compile': False * 'torch\_compile\_backend': None * 'torch\_compile\_mode': None * 'dispatch\_batches': None * 'split\_batches': None * 'include\_tokens\_per\_second': False * 'include\_num\_input\_tokens\_seen': False * 'neftune\_noise\_alpha': None * 'optim\_target\_modules': None * 'batch\_sampler': batch\_sampler * 'multi\_dataset\_batch\_sampler': proportional ### Training Logs ### Environmental Impact Carbon emissions were measured using CodeCarbon. * Energy Consumed: 0.000 kWh * Carbon Emitted: 0.000 kg of CO2 * Hours Used: 0.002 hours ### Training Hardware * On Cloud: No * GPU Model: 1 x NVIDIA GeForce RTX 3090 * CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K * RAM Size: 31.78 GB ### Framework Versions * Python: 3.11.6 * Sentence Transformers: 3.0.0.dev0 * Transformers: 4.41.0.dev0 * PyTorch: 2.3.0+cu121 * Accelerate: 0.26.1 * Datasets: 2.18.0 * Tokenizers: 0.19.1 ### BibTeX #### Sentence Transformers
[ "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Maximum Sequence Length: 1000000 tokens\n* Output Dimensionality: 300 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\n\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`", "#### Semantic Similarity\n\n\n* Dataset: 'sts-test'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': 
False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.000 kWh\n* Carbon Emitted: 0.000 kg of CO2\n* Hours Used: 0.002 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
[ "TAGS\n#sentence-transformers #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #model-index #co2_eq_emissions #endpoints_compatible #region-us \n", "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Maximum Sequence Length: 1000000 tokens\n* Output Dimensionality: 300 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\n\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`", "#### Semantic Similarity\n\n\n* Dataset: 'sts-test'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 
'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.000 kWh\n* Carbon Emitted: 0.000 kg of CO2\n* Hours Used: 0.002 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
text-generation
transformers
# mlabonne/ChimeraLlama-3-8B AWQ

- Model creator: [mlabonne](https://huggingface.co/mlabonne)
- Original model: [ChimeraLlama-3-8B](https://huggingface.co/mlabonne/ChimeraLlama-3-8B)

## Model Summary

ChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite.

ChimeraLlama-3-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)
* [Locutusque/Llama-3-Orca-1.0-8B](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
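For the Transformers route listed above, a minimal loading sketch may help. It assumes the `solidrust/ChimeraLlama-3-8B-AWQ` repo id from this card and that `autoawq` and `accelerate` are installed; the prompt and generation settings are illustrative, not values recommended by the model creator.

```python
# Minimal sketch: run the AWQ checkpoint through Transformers (>= 4.35.0).
# Requires the autoawq and accelerate packages; generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/ChimeraLlama-3-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what a model merge is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```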
{"license": "other", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "merge", "mergekit", "lazymergekit", "llama"], "base_model": ["NousResearch/Meta-Llama-3-8B-Instruct", "mlabonne/OrpoLlama-3-8B", "Locutusque/Llama-3-Orca-1.0-8B", "abacusai/Llama-3-Smaug-8B"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/ChimeraLlama-3-8B-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "merge", "mergekit", "lazymergekit", "conversational", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:mlabonne/OrpoLlama-3-8B", "base_model:Locutusque/Llama-3-Orca-1.0-8B", "base_model:abacusai/Llama-3-Smaug-8B", "license:other", "text-generation-inference", "region:us" ]
null
2024-04-25T15:39:12+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #merge #mergekit #lazymergekit #conversational #base_model-NousResearch/Meta-Llama-3-8B-Instruct #base_model-mlabonne/OrpoLlama-3-8B #base_model-Locutusque/Llama-3-Orca-1.0-8B #base_model-abacusai/Llama-3-Smaug-8B #license-other #text-generation-inference #region-us
# mlabonne/ChimeraLlama-3-8B AWQ - Model creator: mlabonne - Original model: ChimeraLlama-3-8B ## Model Summary ChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite. ChimeraLlama-3-8B is a merge of the following models using LazyMergekit: * NousResearch/Meta-Llama-3-8B-Instruct * mlabonne/OrpoLlama-3-8B * Locutusque/Llama-3-Orca-1.0-8B * abacusai/Llama-3-Smaug-8B ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code
[ "# mlabonne/ChimeraLlama-3-8B AWQ\n\n- Model creator: mlabonne\n- Original model: ChimeraLlama-3-8B", "## Model Summary\n\nChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite.\n\nChimeraLlama-3-8B is a merge of the following models using LazyMergekit:\n* NousResearch/Meta-Llama-3-8B-Instruct\n* mlabonne/OrpoLlama-3-8B\n* Locutusque/Llama-3-Orca-1.0-8B\n* abacusai/Llama-3-Smaug-8B", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #merge #mergekit #lazymergekit #conversational #base_model-NousResearch/Meta-Llama-3-8B-Instruct #base_model-mlabonne/OrpoLlama-3-8B #base_model-Locutusque/Llama-3-Orca-1.0-8B #base_model-abacusai/Llama-3-Smaug-8B #license-other #text-generation-inference #region-us \n", "# mlabonne/ChimeraLlama-3-8B AWQ\n\n- Model creator: mlabonne\n- Original model: ChimeraLlama-3-8B", "## Model Summary\n\nChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite.\n\nChimeraLlama-3-8B is a merge of the following models using LazyMergekit:\n* NousResearch/Meta-Llama-3-8B-Instruct\n* mlabonne/OrpoLlama-3-8B\n* Locutusque/Llama-3-Orca-1.0-8B\n* abacusai/Llama-3-Smaug-8B", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/qubvel-hf-co/transformers-detection-model-finetuning-cppe5/runs/hgyy7wjo) # facebook-detr-resnet-50-finetuned-10k-cppe5-manual-pad This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 1 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.18.0 - Tokenizers 0.19.0
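A short inference sketch for the checkpoint above; the repo id is taken from this card, while the test image and the 0.5 score threshold are illustrative choices.

```python
# Sketch: object detection with the fine-tuned DETR checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "qubvel-hf/facebook-detr-resnet-50-finetuned-10k-cppe5-manual-pad"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn raw logits/boxes into thresholded detections in (xmin, ymin, xmax, ymax) format.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```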
{"license": "apache-2.0", "tags": ["object-detection", "vision", "generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "facebook-detr-resnet-50-finetuned-10k-cppe5-manual-pad", "results": []}]}
qubvel-hf/facebook-detr-resnet-50-finetuned-10k-cppe5-manual-pad
null
[ "transformers", "safetensors", "detr", "object-detection", "vision", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:40:15+00:00
[]
[]
TAGS #transformers #safetensors #detr #object-detection #vision #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
<img src="URL alt="Visualize in Weights & Biases" width="200" height="32"/> # facebook-detr-resnet-50-finetuned-10k-cppe5-manual-pad This model is a fine-tuned version of facebook/detr-resnet-50 on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 1 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.18.0 - Tokenizers 0.19.0
[ "# facebook-detr-resnet-50-finetuned-10k-cppe5-manual-pad\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on the cppe-5 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 1\n- seed: 1337\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 100.0\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 1.13.0+cu117\n- Datasets 2.18.0\n- Tokenizers 0.19.0" ]
[ "TAGS\n#transformers #safetensors #detr #object-detection #vision #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n", "# facebook-detr-resnet-50-finetuned-10k-cppe5-manual-pad\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on the cppe-5 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 1\n- seed: 1337\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 100.0\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 1.13.0+cu117\n- Datasets 2.18.0\n- Tokenizers 0.19.0" ]
text2text-generation
transformers
bfloat16 safetensors conversion of https://huggingface.co/google/t5_xxl_true_nli_mixture
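A possible way to load the bfloat16 weights and score an entailment pair is sketched below. The `premise:`/`hypothesis:` input format and the convention of generating "1" for entailment (else "0") are carried over from the upstream TRUE NLI model and should be verified against its card.

```python
# Sketch: load the bfloat16 conversion and score a premise/hypothesis pair.
# The prompt format and the "1"/"0" output convention are assumptions taken from
# the upstream model's documentation; verify before relying on them.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "tdolega/t5_xxl_true_nli_mixture-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

premise = "A plane is taking off."
hypothesis = "An air plane is taking off."
inputs = tokenizer(
    f"premise: {premise} hypothesis: {hypothesis}", return_tensors="pt"
).to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # "1" = entailed, "0" = not
```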
{"language": ["en"], "license": "apache-2.0", "datasets": ["tals/vitaminc", "SetFit/mnli", "snli", "fever", "paws", "scitail"]}
tdolega/t5_xxl_true_nli_mixture-bf16
null
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "dataset:tals/vitaminc", "dataset:SetFit/mnli", "dataset:snli", "dataset:fever", "dataset:paws", "dataset:scitail", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T15:41:03+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #t5 #text2text-generation #en #dataset-tals/vitaminc #dataset-SetFit/mnli #dataset-snli #dataset-fever #dataset-paws #dataset-scitail #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
bfloat16 safetensors conversion of URL
[]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #en #dataset-tals/vitaminc #dataset-SetFit/mnli #dataset-snli #dataset-fever #dataset-paws #dataset-scitail #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Asmaamaghraby/historyqa_model This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.1424 - Validation Loss: 3.0324 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 42, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.6890 | 3.5754 | 0 | | 3.3347 | 3.0324 | 1 | | 3.1365 | 3.0324 | 2 | | 3.1489 | 3.0324 | 3 | | 3.1397 | 3.0324 | 4 | | 3.1409 | 3.0324 | 5 | | 3.1439 | 3.0324 | 6 | | 3.1297 | 3.0324 | 7 | | 3.1456 | 3.0324 | 8 | | 3.1424 | 3.0324 | 9 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
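An extractive-QA inference sketch for the TensorFlow checkpoint above; the Arabic question/context pair is a placeholder and the repo id comes from this card.

```python
# Sketch: extractive question answering with the TF checkpoint.
# The question/context strings are placeholders for illustration only.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

model_id = "Asmaamaghraby/historyqa_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

question = "متى تأسست الدولة؟"  # placeholder question
context = "تأسست الدولة في عام ١٩٢٣ بعد حرب طويلة."  # placeholder context

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the answer span.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer_ids = inputs["input_ids"][0, start : end + 1].numpy()
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```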
{"tags": ["generated_from_keras_callback"], "base_model": "aubmindlab/bert-base-arabertv2", "model-index": [{"name": "Asmaamaghraby/historyqa_model", "results": []}]}
Asmaamaghraby/historyqa_model
null
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "base_model:aubmindlab/bert-base-arabertv2", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:41:33+00:00
[]
[]
TAGS #transformers #tf #bert #question-answering #generated_from_keras_callback #base_model-aubmindlab/bert-base-arabertv2 #endpoints_compatible #region-us
Asmaamaghraby/historyqa\_model ============================== This model is a fine-tuned version of aubmindlab/bert-base-arabertv2 on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 3.1424 * Validation Loss: 3.0324 * Epoch: 9 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 42, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.40.0 * TensorFlow 2.15.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 42, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #bert #question-answering #generated_from_keras_callback #base_model-aubmindlab/bert-base-arabertv2 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 42, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4503 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5641 | 0.8 | 1000 | 1.4503 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
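The hyperparameters above map onto a standard `Trainer` setup roughly as sketched below; the dataset, collate function, and label mappings are omitted and assumed to exist, and nothing beyond the listed values is taken from the original training script.

```python
# Sketch: TrainingArguments mirroring the listed hyperparameters
# (lr 5e-5, batch size 8, 1 epoch, linear schedule, seed 42, native AMP).
# Dataset preparation, the data collator, and label mappings are assumed to exist.
from transformers import (
    AutoImageProcessor,
    AutoModelForObjectDetection,
    Trainer,
    TrainingArguments,
)

checkpoint = "facebook/detr-resnet-50"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(
    checkpoint,
    ignore_mismatched_sizes=True,  # needed when the label set differs from COCO
)

args = TrainingArguments(
    output_dir="detr",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                    # "Native AMP" mixed precision
    remove_unused_columns=False,  # keep pixel_values / labels from the collator
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset,
#                   data_collator=collate_fn, tokenizer=processor)
# trainer.train()
```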
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "detr", "results": []}]}
MarkoLillemagi/detr
null
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:42:52+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
detr ==== This model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.4503 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/zvnuskx
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:43:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
liquid9212/g6n20gi
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:43:47+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [cilantro9246/le6l0kb](https://huggingface.co/cilantro9246/le6l0kb) * [Grayx/sad_llama_38](https://huggingface.co/Grayx/sad_llama_38) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Grayx/sad_llama_38 layer_range: [0, 32] - model: cilantro9246/le6l0kb layer_range: [0, 32] merge_method: slerp base_model: cilantro9246/le6l0kb parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
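To reproduce a merge like this programmatically, mergekit also exposes a Python entry point along the lines sketched below; the module and option names follow mergekit's README example and may change between versions, and the config/output paths are illustrative.

```python
# Sketch: running the YAML configuration above through mergekit's Python API.
# Names follow mergekit's documented example (MergeConfiguration, run_merge,
# MergeOptions) and should be checked against the installed version.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        copy_tokenizer=True,  # copy tokenizer files from the base model
        lazy_unpickle=True,   # lower peak memory while loading shards
        cuda=False,           # set True to merge on GPU
    ),
)
```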
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cilantro9246/le6l0kb", "Grayx/sad_llama_38"]}
Sumail/Chalice1
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:cilantro9246/le6l0kb", "base_model:Grayx/sad_llama_38", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T15:43:56+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-cilantro9246/le6l0kb #base_model-Grayx/sad_llama_38 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * cilantro9246/le6l0kb * Grayx/sad_llama_38 ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cilantro9246/le6l0kb\n* Grayx/sad_llama_38", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-cilantro9246/le6l0kb #base_model-Grayx/sad_llama_38 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cilantro9246/le6l0kb\n* Grayx/sad_llama_38", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
sentence-similarity
sentence-transformers
# SentenceTransformer based on google-bert/bert-base-uncased This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): CNN( (convs): ModuleList( (0): Conv1d(768, 256, kernel_size=(1,), stride=(1,)) (1): Conv1d(768, 256, kernel_size=(3,), stride=(1,), padding=(1,)) (2): Conv1d(768, 256, kernel_size=(5,), stride=(1,), padding=(2,)) ) ) (2): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomaarsen/bert-base-uncased-cnn") # Run inference sentences = [ 'A person makes fire.', 'The person is starting a fire.', 'Blast on Indian train kills one', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8518 | | **spearman_cosine** | **0.8525** | | pearson_manhattan | 0.8009 | | spearman_manhattan | 0.8052 | | pearson_euclidean | 0.8007 | | spearman_euclidean | 0.8053 | | pearson_dot | 0.7449 | | spearman_dot | 0.7559 | | pearson_max | 0.8518 | | spearman_max | 0.8525 | #### Semantic Similarity * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8302 | | **spearman_cosine** | **0.8232** | | pearson_manhattan | 0.8082 | | spearman_manhattan | 0.801 | | pearson_euclidean | 0.8075 | | spearman_euclidean | 0.8001 | | pearson_dot | 0.7172 | | spearman_dot | 0.7096 | | pearson_max | 0.8302 | | spearman_max | 0.8232 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### sentence-transformers/stsb * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: 
[sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------|:------------------------------------------------------|:------------------| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: False - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 
'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: None - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-test_spearman_cosine | |:------:|:----:|:-------------:|:------:|:-----------------------:|:------------------------:| | 0.5556 | 100 | 0.0417 | 0.0304 | 0.8525 | - | | 1.0 | 180 | - | - | - | 0.8232 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.003 kWh - **Carbon Emitted**: 0.001 kg of CO2 - **Hours Used**: 0.014 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 3.0.0.dev0 - Transformers: 4.41.0.dev0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.26.1 - Datasets: 2.18.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
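As a complement to the architecture dump above, here is a minimal, hedged sketch of how a Transformer + CNN + mean-pooling stack like this one could be assembled from the `sentence_transformers.models` building blocks before training. The `out_channels=256` and `kernel_sizes=[1, 3, 5]` values mirror the Conv1d shapes shown in the architecture section; everything else is an illustrative assumption rather than the exact recipe used for this checkpoint.

```python
from sentence_transformers import SentenceTransformer, models

# Token embeddings from the base BERT model (max_seq_length=512, as in the card).
word_embedding_model = models.Transformer("google-bert/bert-base-uncased", max_seq_length=512)

# CNN over the token embeddings: three parallel Conv1d branches (kernel sizes 1, 3, 5),
# each with 256 output channels, concatenated into a 768-dimensional token representation.
cnn = models.CNN(
    in_word_embedding_dimension=word_embedding_model.get_word_embedding_dimension(),
    out_channels=256,
    kernel_sizes=[1, 3, 5],
)

# Mean pooling over the CNN outputs (3 branches * 256 channels = 768 dimensions).
pooling = models.Pooling(word_embedding_dimension=768, pooling_mode="mean")

model = SentenceTransformer(modules=[word_embedding_model, cnn, pooling])
print(model)  # prints a module stack similar to the architecture shown above
```

Training such a stack would then proceed with the `CosineSimilarityLoss` on the STSB pairs described in the training details above.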
{"language": ["en"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss"], "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "base_model": "google-bert/bert-base-uncased", "widget": [{"source_sentence": "A man is speaking.", "sentences": ["A man is talking on a phone.", "The boy is jumping into a lake.", "A cat is pouncing on a trampoline."]}, {"source_sentence": "A woman is reading.", "sentences": ["A woman is writing something.", "A woman is applying eye shadow.", "A tiger is walking around his cage."]}, {"source_sentence": "A baby is laughing.", "sentences": ["The baby laughed in his car seat.", "A green bus drives down a road.", "A woman is applying eye shadow."]}, {"source_sentence": "A man jumping rope", "sentences": ["A man is climbing a rope.", "The boy is jumping into a lake.", "Two women sitting in lawn chairs."]}, {"source_sentence": "A person makes fire.", "sentences": ["The person is starting a fire.", "Blast on Indian train kills one", "An animal is chewing on something."]}], "pipeline_tag": "sentence-similarity", "co2_eq_emissions": {"emissions": 1.1600350080390396, "energy_consumed": 0.002984381371948278, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.014, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer based on google-bert/bert-base-uncased", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.8517529845876077, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8524623532914918, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.800899823827701, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8051568979113306, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.8006826117948451, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8053116182840467, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.7449289216960278, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.7558824436512839, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8517529845876077, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8524623532914918, "name": "Spearman Max"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts test", "type": "sts-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.83020870287088, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.823188318981985, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.8082481232573683, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8009567692854708, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.8074730784388158, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.8001358594920889, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.7172194732542608, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.7095712222240558, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 
0.83020870287088, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.823188318981985, "name": "Spearman Max"}]}]}]}
tomaarsen/bert-base-uncased-cnn
null
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss", "en", "arxiv:1908.10084", "base_model:google-bert/bert-base-uncased", "model-index", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:46:08+00:00
[ "1908.10084" ]
[ "en" ]
TAGS #sentence-transformers #safetensors #bert #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #base_model-google-bert/bert-base-uncased #model-index #co2_eq_emissions #endpoints_compatible #region-us
SentenceTransformer based on google-bert/bert-base-uncased ========================================================== This is a sentence-transformers model finetuned from google-bert/bert-base-uncased on the sentence-transformers/stsb dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Model Details ------------- ### Model Description * Model Type: Sentence Transformer * Base model: google-bert/bert-base-uncased * Maximum Sequence Length: 512 tokens * Output Dimensionality: 768 tokens * Similarity Function: Cosine Similarity * Training Dataset: + sentence-transformers/stsb * Language: en ### Model Sources * Documentation: Sentence Transformers Documentation * Repository: Sentence Transformers on GitHub * Hugging Face: Sentence Transformers on Hugging Face ### Full Model Architecture Usage ----- ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: Then you can load this model and run inference. Evaluation ---------- ### Metrics #### Semantic Similarity * Dataset: 'sts-dev' * Evaluated with `EmbeddingSimilarityEvaluator` #### Semantic Similarity * Dataset: 'sts-test' * Evaluated with `EmbeddingSimilarityEvaluator` Training Details ---------------- ### Training Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 5,749 training samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 1,500 evaluation samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Training Hyperparameters #### Non-Default Hyperparameters * 'eval\_strategy': steps * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'num\_train\_epochs': 1 * 'warmup\_ratio': 0.1 * 'fp16': True #### All Hyperparameters Click to expand * 'overwrite\_output\_dir': False * 'do\_predict': False * 'eval\_strategy': steps * 'prediction\_loss\_only': False * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'per\_gpu\_train\_batch\_size': None * 'per\_gpu\_eval\_batch\_size': None * 'gradient\_accumulation\_steps': 1 * 'eval\_accumulation\_steps': None * 'learning\_rate': 5e-05 * 'weight\_decay': 0.0 * 'adam\_beta1': 0.9 * 'adam\_beta2': 0.999 * 'adam\_epsilon': 1e-08 * 'max\_grad\_norm': 1.0 * 'num\_train\_epochs': 1 * 'max\_steps': -1 * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_kwargs': {} * 'warmup\_ratio': 0.1 * 'warmup\_steps': 0 * 'log\_level': passive * 'log\_level\_replica': warning * 'log\_on\_each\_node': True * 'logging\_nan\_inf\_filter': True * 'save\_safetensors': True * 'save\_on\_each\_node': False * 'save\_only\_model': False * 'no\_cuda': False * 'use\_cpu': False * 'use\_mps\_device': False * 'seed': 42 * 'data\_seed': None * 'jit\_mode\_eval': False * 'use\_ipex': False * 'bf16': False * 'fp16': True * 'fp16\_opt\_level': O1 * 'half\_precision\_backend': auto * 'bf16\_full\_eval': False * 'fp16\_full\_eval': False * 'tf32': None * 'local\_rank': 0 * 'ddp\_backend': None * 'tpu\_num\_cores': None * 'tpu\_metrics\_debug': False * 'debug': [] * 'dataloader\_drop\_last': False 
* 'dataloader\_num\_workers': 0 * 'dataloader\_prefetch\_factor': None * 'past\_index': -1 * 'disable\_tqdm': False * 'remove\_unused\_columns': True * 'label\_names': None * 'load\_best\_model\_at\_end': False * 'ignore\_data\_skip': False * 'fsdp': [] * 'fsdp\_min\_num\_params': 0 * 'fsdp\_config': {'min\_num\_params': 0, 'xla': False, 'xla\_fsdp\_v2': False, 'xla\_fsdp\_grad\_ckpt': False} * 'fsdp\_transformer\_layer\_cls\_to\_wrap': None * 'accelerator\_config': {'split\_batches': False, 'dispatch\_batches': None, 'even\_batches': True, 'use\_seedable\_sampler': True, 'non\_blocking': False, 'gradient\_accumulation\_kwargs': None} * 'deepspeed': None * 'label\_smoothing\_factor': 0.0 * 'optim': adamw\_torch * 'optim\_args': None * 'adafactor': False * 'group\_by\_length': False * 'length\_column\_name': length * 'ddp\_find\_unused\_parameters': None * 'ddp\_bucket\_cap\_mb': None * 'ddp\_broadcast\_buffers': None * 'dataloader\_pin\_memory': True * 'dataloader\_persistent\_workers': False * 'skip\_memory\_metrics': True * 'use\_legacy\_prediction\_loop': False * 'push\_to\_hub': False * 'resume\_from\_checkpoint': None * 'hub\_model\_id': None * 'hub\_strategy': every\_save * 'hub\_private\_repo': False * 'hub\_always\_push': False * 'gradient\_checkpointing': False * 'gradient\_checkpointing\_kwargs': None * 'include\_inputs\_for\_metrics': False * 'eval\_do\_concat\_batches': True * 'fp16\_backend': auto * 'push\_to\_hub\_model\_id': None * 'push\_to\_hub\_organization': None * 'mp\_parameters': * 'auto\_find\_batch\_size': False * 'full\_determinism': False * 'torchdynamo': None * 'ray\_scope': last * 'ddp\_timeout': 1800 * 'torch\_compile': False * 'torch\_compile\_backend': None * 'torch\_compile\_mode': None * 'dispatch\_batches': None * 'split\_batches': None * 'include\_tokens\_per\_second': False * 'include\_num\_input\_tokens\_seen': False * 'neftune\_noise\_alpha': None * 'optim\_target\_modules': None * 'batch\_sampler': batch\_sampler * 'multi\_dataset\_batch\_sampler': proportional ### Training Logs ### Environmental Impact Carbon emissions were measured using CodeCarbon. * Energy Consumed: 0.003 kWh * Carbon Emitted: 0.001 kg of CO2 * Hours Used: 0.014 hours ### Training Hardware * On Cloud: No * GPU Model: 1 x NVIDIA GeForce RTX 3090 * CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K * RAM Size: 31.78 GB ### Framework Versions * Python: 3.11.6 * Sentence Transformers: 3.0.0.dev0 * Transformers: 4.41.0.dev0 * PyTorch: 2.3.0+cu121 * Accelerate: 0.26.1 * Datasets: 2.18.0 * Tokenizers: 0.19.1 ### BibTeX #### Sentence Transformers
[ "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Base model: google-bert/bert-base-uncased\n* Maximum Sequence Length: 512 tokens\n* Output Dimensionality: 768 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`", "#### Semantic Similarity\n\n\n* Dataset: 'sts-test'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': 
None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.003 kWh\n* Carbon Emitted: 0.001 kg of CO2\n* Hours Used: 0.014 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
[ "TAGS\n#sentence-transformers #safetensors #bert #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #base_model-google-bert/bert-base-uncased #model-index #co2_eq_emissions #endpoints_compatible #region-us \n", "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Base model: google-bert/bert-base-uncased\n* Maximum Sequence Length: 512 tokens\n* Output Dimensionality: 768 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`", "#### Semantic Similarity\n\n\n* Dataset: 'sts-test'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 
'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.003 kWh\n* Carbon Emitted: 0.001 kg of CO2\n* Hours Used: 0.014 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
text-generation
null
## Llamacpp imatrix Quantizations of L3-TheSpice-8b-v0.8.3 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization. Original model: https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3 All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` {System Prompt} Username: {Input} BotName: {Response} Username: {Input} BotName: {Response} ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [L3-TheSpice-8b-v0.8.3-Q8_0.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [L3-TheSpice-8b-v0.8.3-Q6_K.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [L3-TheSpice-8b-v0.8.3-Q5_K_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [L3-TheSpice-8b-v0.8.3-Q5_K_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [L3-TheSpice-8b-v0.8.3-Q4_K_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [L3-TheSpice-8b-v0.8.3-Q4_K_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [L3-TheSpice-8b-v0.8.3-IQ4_NL.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [L3-TheSpice-8b-v0.8.3-IQ4_XS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [L3-TheSpice-8b-v0.8.3-Q3_K_L.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [L3-TheSpice-8b-v0.8.3-Q3_K_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [L3-TheSpice-8b-v0.8.3-IQ3_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [L3-TheSpice-8b-v0.8.3-IQ3_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. 
| | [L3-TheSpice-8b-v0.8.3-Q3_K_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [L3-TheSpice-8b-v0.8.3-IQ3_XS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [L3-TheSpice-8b-v0.8.3-IQ3_XXS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [L3-TheSpice-8b-v0.8.3-Q2_K.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [L3-TheSpice-8b-v0.8.3-IQ2_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [L3-TheSpice-8b-v0.8.3-IQ2_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [L3-TheSpice-8b-v0.8.3-IQ2_XS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [L3-TheSpice-8b-v0.8.3-IQ2_XXS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [L3-TheSpice-8b-v0.8.3-IQ1_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [L3-TheSpice-8b-v0.8.3-IQ1_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. 
These I-quants can also be used on CPU and Apple Metal, but they will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan (which also targets AMD cards), so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines offer specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
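To make the sizing guidance above concrete, here is a small, hypothetical Python sketch that picks one of the listed quants for a given VRAM budget and fetches it with `huggingface_hub`. The `QUANTS` table and `pick_quant` helper are illustrative assumptions (sizes copied from the table above, with 1.5 GB of headroom chosen from the 1-2 GB rule of thumb); only `hf_hub_download`, the repository id, and the file naming pattern come from this repo.

```python
from huggingface_hub import hf_hub_download

# Illustrative file sizes (GB) taken from the table above; pick the largest quant
# that leaves roughly 1-2 GB of headroom below your available VRAM, as recommended.
QUANTS = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest listed quant that fits in `vram_gb` minus the headroom."""
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANTS.items() if size <= budget}
    if not fitting:
        raise ValueError("No listed quant fits in the available VRAM budget.")
    return max(fitting, key=fitting.get)

quant = pick_quant(vram_gb=8.0)  # e.g. an 8 GB GPU -> Q5_K_M
path = hf_hub_download(
    repo_id="bartowski/L3-TheSpice-8b-v0.8.3-GGUF",
    filename=f"L3-TheSpice-8b-v0.8.3-{quant}.gguf",
)
print(quant, path)
```

The downloaded `.gguf` file can then be loaded by any llama.cpp-based runtime (llama.cpp itself, LM Studio, llama-cpp-python, and so on), subject to the backend caveats above.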
{"license": "cc-by-nc-4.0", "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/L3-TheSpice-8b-v0.8.3-GGUF
null
[ "gguf", "text-generation", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-25T15:46:16+00:00
[]
[]
TAGS #gguf #text-generation #license-cc-by-nc-4.0 #region-us
Llamacpp imatrix Quantizations of L3-TheSpice-8b-v0.8.3 ------------------------------------------------------- Using <a href="URL release <a href="URL for quantization. Original model: URL All quants made using imatrix option with dataset provided by Kalomaze here Prompt format ------------- Download a file (not the whole branch) from below: -------------------------------------------------- Which file should I choose? --------------------------- A great write up with charts showing various performances is provided by Artefact2 here The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: URL feature matrix But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#gguf #text-generation #license-cc-by-nc-4.0 #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_summarization_finetuned2 This model is a fine-tuned version of [Falconsai/text_summarization](https://huggingface.co/Falconsai/text_summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3126 - Rouge1: 0.0675 - Rouge2: 0.0578 - Rougel: 0.0674 - Rougelsum: 0.0674 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.4139 | 1.0 | 2000 | 0.3411 | 0.0632 | 0.0524 | 0.0632 | 0.0632 | 19.0 | | 0.3635 | 2.0 | 4000 | 0.3215 | 0.0658 | 0.0557 | 0.0658 | 0.0658 | 19.0 | | 0.348 | 3.0 | 6000 | 0.3146 | 0.0668 | 0.0571 | 0.0668 | 0.0668 | 19.0 | | 0.3445 | 4.0 | 8000 | 0.3126 | 0.0675 | 0.0578 | 0.0674 | 0.0674 | 19.0 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
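The card itself ships no usage snippet; as a hedged sketch (assuming the checkpoint behaves like any other `t5`-based summarization model, which the repository tags indicate), inference via the `transformers` pipeline would look roughly like this. The example article text and the generation lengths are illustrative choices, not values from the card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub; the repo id comes from this card.
summarizer = pipeline("summarization", model="HARDYCHEN/text_summarization_finetuned2")

article = (
    "Hugging Face Transformers provides thousands of pretrained models for tasks "
    "such as text classification, question answering, and summarization, and lets "
    "users fine-tune them on their own data with the Trainer API."
)

# Generation lengths are illustrative; the card reports an average generated length of 19 tokens.
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

Note that the reported average generated length is about 19 tokens, so summaries from this checkpoint will be short.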
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "Falconsai/text_summarization", "model-index": [{"name": "text_summarization_finetuned2", "results": []}]}
HARDYCHEN/text_summarization_finetuned2
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Falconsai/text_summarization", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T15:46:27+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-Falconsai/text_summarization #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
text\_summarization\_finetuned2 =============================== This model is a fine-tuned version of Falconsai/text\_summarization on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.3126 * Rouge1: 0.0675 * Rouge2: 0.0578 * Rougel: 0.0674 * Rougelsum: 0.0674 * Gen Len: 19.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * distributed\_type: multi-GPU * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-Falconsai/text_summarization #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
marcagve18/baby-face-generation
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-25T15:46:38+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
sentence-similarity
sentence-transformers
# SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 1000000 tokens - **Output Dimensionality:** 300 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(400001, 300) ) (1): WordWeights( (emb_layer): Embedding(400001, 1) ) (2): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (3): Dense({'in_features': 300, 'out_features': 300, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (4): Dense({'in_features': 300, 'out_features': 300, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomaarsen/glove-wikipedia-tf-idf") # Run inference sentences = [ 'A woman is dancing.', 'A man is dancing.', 'A brown horse in a green field.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 300] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7572 | | **spearman_cosine** | **0.7532** | | pearson_manhattan | 0.717 | | spearman_manhattan | 0.7273 | | pearson_euclidean | 0.717 | | spearman_euclidean | 0.7268 | | pearson_dot | 0.5785 | | spearman_dot | 0.6221 | | pearson_max | 0.7572 | | spearman_max | 0.7532 | #### Semantic Similarity * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6689 | | **spearman_cosine** | **0.6405** | | pearson_manhattan | 0.6177 | | spearman_manhattan | 0.6152 | | pearson_euclidean | 0.6185 | | spearman_euclidean | 0.6163 | | pearson_dot | 0.5093 | | spearman_dot | 0.5194 | | pearson_max | 0.6689 | | spearman_max | 0.6405 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### sentence-transformers/stsb * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 1 tokens</li><li>mean: 3.38 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 3.39 tokens</li><li>max: 10 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: 
[sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 1 tokens</li><li>mean: 5.17 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 5.08 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------|:------------------------------------------------------|:------------------| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: False - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 
'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: None - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-test_spearman_cosine | |:------:|:----:|:-------------:|:------:|:-----------------------:|:------------------------:| | 0.5556 | 100 | 0.0819 | 0.0584 | 0.7532 | - | | 1.0 | 180 | - | - | - | 0.6405 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.000 kWh - **Carbon Emitted**: 0.000 kg of CO2 - **Hours Used**: 0.009 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 3.0.0.dev0 - Transformers: 4.41.0.dev0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.26.1 - Datasets: 2.18.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
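The dev-set Spearman figure reported above comes from Sentence Transformers' built-in `EmbeddingSimilarityEvaluator`. A minimal, hedged sketch of reproducing it is shown below; it assumes the `datasets` package is installed, that the stsb dev split is published under the split name `validation`, and that the evaluator's return format matches your installed Sentence Transformers version (older releases return a single float, newer ones a dict of metrics).

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Load the model published with this card
model = SentenceTransformer("tomaarsen/glove-wikipedia-tf-idf")

# stsb dev split: sentence1, sentence2, score (scores already normalized to [0, 1])
dev = load_dataset("sentence-transformers/stsb", split="validation")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=dev["sentence1"],
    sentences2=dev["sentence2"],
    scores=dev["score"],
    name="sts-dev",
)

# Runs the cosine / dot / Euclidean / Manhattan correlation metrics on the dev pairs
print(evaluator(model))
```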
{"language": ["en"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss"], "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "widget": [{"source_sentence": "Women are running.", "sentences": ["Women are running.", "A brown horse in a green field.", "A man plays the guitar and sings."]}, {"source_sentence": "A baby is laughing.", "sentences": ["A baby is crawling happily.", "\u2018Nelson Mandela is recovering\u2019", "Chinese shares close higher on Tuesday"]}, {"source_sentence": "A woman is reading.", "sentences": ["A woman is writing something.", "A slow loris hanging on a cord.", "The lamb is looking at the camera."]}, {"source_sentence": "A man jumping rope", "sentences": ["A man is climbing a rope.", "Blast on Indian train kills one", "Finance minister promises no new taxes"]}, {"source_sentence": "A woman is dancing.", "sentences": ["A man is dancing.", "A brown horse in a green field.", "Australia cuts rates to record low"]}], "pipeline_tag": "sentence-similarity", "co2_eq_emissions": {"emissions": 0.1439181045681014, "energy_consumed": 0.0003702530590737928, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.009, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.757199024718024, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.7531549457233511, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.716988424804303, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.7272795203957675, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.71702575877283, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.7268093526359362, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.5785350115318801, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.6221005727058916, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.757199024718024, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.7531549457233511, "name": "Spearman Max"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts test", "type": "sts-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.6689490577594517, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.6405445334782408, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.6176678945140798, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.615214522139229, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.6184837579619497, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.6162673767473799, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.50934636927282, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.5194344025197553, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.6689490577594517, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.6405445334782408, "name": 
"Spearman Max"}]}]}]}
tomaarsen/glove-wikipedia-tf-idf
null
[ "sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss", "en", "arxiv:1908.10084", "model-index", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:48:11+00:00
[ "1908.10084" ]
[ "en" ]
TAGS #sentence-transformers #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #model-index #co2_eq_emissions #endpoints_compatible #region-us
SentenceTransformer =================== This is a sentence-transformers model trained on the sentence-transformers/stsb dataset. It maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Model Details ------------- ### Model Description * Model Type: Sentence Transformer * Maximum Sequence Length: 1000000 tokens * Output Dimensionality: 300 tokens * Similarity Function: Cosine Similarity * Training Dataset: + sentence-transformers/stsb * Language: en ### Model Sources * Documentation: Sentence Transformers Documentation * Repository: Sentence Transformers on GitHub * Hugging Face: Sentence Transformers on Hugging Face ### Full Model Architecture Usage ----- ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: Then you can load this model and run inference. Evaluation ---------- ### Metrics #### Semantic Similarity * Dataset: 'sts-dev' * Evaluated with `EmbeddingSimilarityEvaluator` #### Semantic Similarity * Dataset: 'sts-test' * Evaluated with `EmbeddingSimilarityEvaluator` Training Details ---------------- ### Training Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 5,749 training samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 1,500 evaluation samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Training Hyperparameters #### Non-Default Hyperparameters * 'eval\_strategy': steps * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'num\_train\_epochs': 1 * 'warmup\_ratio': 0.1 * 'fp16': True #### All Hyperparameters Click to expand * 'overwrite\_output\_dir': False * 'do\_predict': False * 'eval\_strategy': steps * 'prediction\_loss\_only': False * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'per\_gpu\_train\_batch\_size': None * 'per\_gpu\_eval\_batch\_size': None * 'gradient\_accumulation\_steps': 1 * 'eval\_accumulation\_steps': None * 'learning\_rate': 5e-05 * 'weight\_decay': 0.0 * 'adam\_beta1': 0.9 * 'adam\_beta2': 0.999 * 'adam\_epsilon': 1e-08 * 'max\_grad\_norm': 1.0 * 'num\_train\_epochs': 1 * 'max\_steps': -1 * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_kwargs': {} * 'warmup\_ratio': 0.1 * 'warmup\_steps': 0 * 'log\_level': passive * 'log\_level\_replica': warning * 'log\_on\_each\_node': True * 'logging\_nan\_inf\_filter': True * 'save\_safetensors': True * 'save\_on\_each\_node': False * 'save\_only\_model': False * 'no\_cuda': False * 'use\_cpu': False * 'use\_mps\_device': False * 'seed': 42 * 'data\_seed': None * 'jit\_mode\_eval': False * 'use\_ipex': False * 'bf16': False * 'fp16': True * 'fp16\_opt\_level': O1 * 'half\_precision\_backend': auto * 'bf16\_full\_eval': False * 'fp16\_full\_eval': False * 'tf32': None * 'local\_rank': 0 * 'ddp\_backend': None * 'tpu\_num\_cores': None * 'tpu\_metrics\_debug': False * 'debug': [] * 'dataloader\_drop\_last': False * 'dataloader\_num\_workers': 0 * 'dataloader\_prefetch\_factor': None * 'past\_index': -1 * 'disable\_tqdm': False * 'remove\_unused\_columns': True * 
'label\_names': None * 'load\_best\_model\_at\_end': False * 'ignore\_data\_skip': False * 'fsdp': [] * 'fsdp\_min\_num\_params': 0 * 'fsdp\_config': {'min\_num\_params': 0, 'xla': False, 'xla\_fsdp\_v2': False, 'xla\_fsdp\_grad\_ckpt': False} * 'fsdp\_transformer\_layer\_cls\_to\_wrap': None * 'accelerator\_config': {'split\_batches': False, 'dispatch\_batches': None, 'even\_batches': True, 'use\_seedable\_sampler': True, 'non\_blocking': False, 'gradient\_accumulation\_kwargs': None} * 'deepspeed': None * 'label\_smoothing\_factor': 0.0 * 'optim': adamw\_torch * 'optim\_args': None * 'adafactor': False * 'group\_by\_length': False * 'length\_column\_name': length * 'ddp\_find\_unused\_parameters': None * 'ddp\_bucket\_cap\_mb': None * 'ddp\_broadcast\_buffers': None * 'dataloader\_pin\_memory': True * 'dataloader\_persistent\_workers': False * 'skip\_memory\_metrics': True * 'use\_legacy\_prediction\_loop': False * 'push\_to\_hub': False * 'resume\_from\_checkpoint': None * 'hub\_model\_id': None * 'hub\_strategy': every\_save * 'hub\_private\_repo': False * 'hub\_always\_push': False * 'gradient\_checkpointing': False * 'gradient\_checkpointing\_kwargs': None * 'include\_inputs\_for\_metrics': False * 'eval\_do\_concat\_batches': True * 'fp16\_backend': auto * 'push\_to\_hub\_model\_id': None * 'push\_to\_hub\_organization': None * 'mp\_parameters': * 'auto\_find\_batch\_size': False * 'full\_determinism': False * 'torchdynamo': None * 'ray\_scope': last * 'ddp\_timeout': 1800 * 'torch\_compile': False * 'torch\_compile\_backend': None * 'torch\_compile\_mode': None * 'dispatch\_batches': None * 'split\_batches': None * 'include\_tokens\_per\_second': False * 'include\_num\_input\_tokens\_seen': False * 'neftune\_noise\_alpha': None * 'optim\_target\_modules': None * 'batch\_sampler': batch\_sampler * 'multi\_dataset\_batch\_sampler': proportional ### Training Logs ### Environmental Impact Carbon emissions were measured using CodeCarbon. * Energy Consumed: 0.000 kWh * Carbon Emitted: 0.000 kg of CO2 * Hours Used: 0.009 hours ### Training Hardware * On Cloud: No * GPU Model: 1 x NVIDIA GeForce RTX 3090 * CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K * RAM Size: 31.78 GB ### Framework Versions * Python: 3.11.6 * Sentence Transformers: 3.0.0.dev0 * Transformers: 4.41.0.dev0 * PyTorch: 2.3.0+cu121 * Accelerate: 0.26.1 * Datasets: 2.18.0 * Tokenizers: 0.19.1 ### BibTeX #### Sentence Transformers
[ "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Maximum Sequence Length: 1000000 tokens\n* Output Dimensionality: 300 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\n\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`", "#### Semantic Similarity\n\n\n* Dataset: 'sts-test'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': 
False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.000 kWh\n* Carbon Emitted: 0.000 kg of CO2\n* Hours Used: 0.009 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
[ "TAGS\n#sentence-transformers #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #model-index #co2_eq_emissions #endpoints_compatible #region-us \n", "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Maximum Sequence Length: 1000000 tokens\n* Output Dimensionality: 300 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\n\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`", "#### Semantic Similarity\n\n\n* Dataset: 'sts-test'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 
'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.000 kWh\n* Carbon Emitted: 0.000 kg of CO2\n* Hours Used: 0.009 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9698 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.237 | 0.2 | 500 | 1.0990 | | 1.0403 | 0.4 | 1000 | 0.9698 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
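The card above stops at training metrics, so a hedged inference sketch follows. It assumes the fine-tuned weights and an image processor were pushed to the `YaroslavPrytula/detr` repository named in the metadata below; if the repo lacks a preprocessor config, the base `facebook/detr-resnet-50` processor can be substituted. The image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo = "YaroslavPrytula/detr"  # repo id from the metadata below (assumed to hold the fine-tuned weights)
processor = AutoImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in (x_min, y_min, x_max, y_max) pixel coordinates
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```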
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "detr", "results": []}]}
YaroslavPrytula/detr
null
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:51:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
detr ==== This model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.9698 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 1000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Kiran2004/Electra_QCA_Squad This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0515 - Validation Loss: 0.1711 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3443 | 0.2038 | 0 | | 0.2506 | 0.1911 | 1 | | 0.1457 | 0.1931 | 2 | | 0.1177 | 0.1815 | 3 | | 0.1026 | 0.1794 | 4 | | 0.0772 | 0.1669 | 5 | | 0.0716 | 0.1754 | 6 | | 0.0601 | 0.1712 | 7 | | 0.0484 | 0.1721 | 8 | | 0.0515 | 0.1711 | 9 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
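Since the card leaves intended usage blank, here is a hedged extractive-QA sketch. It assumes the TensorFlow weights in `Kiran2004/Electra_QCA_Squad` load through the standard question-answering pipeline (hence `framework="tf"`, which requires TensorFlow to be installed); the question and context strings are placeholders.

```python
from transformers import pipeline

# framework="tf" because the card reports a TensorFlow/Keras training run
qa = pipeline(
    "question-answering",
    model="Kiran2004/Electra_QCA_Squad",
    framework="tf",
)

result = qa(
    question="What was the model fine-tuned from?",  # placeholder question
    context="This checkpoint was fine-tuned from deepset/electra-base-squad2 on a custom QA dataset.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```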
{"license": "cc-by-4.0", "tags": ["generated_from_keras_callback"], "base_model": "deepset/electra-base-squad2", "model-index": [{"name": "Kiran2004/Electra_QCA_Squad", "results": []}]}
Kiran2004/Electra_QCA_Squad
null
[ "transformers", "tf", "electra", "question-answering", "generated_from_keras_callback", "base_model:deepset/electra-base-squad2", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:53:13+00:00
[]
[]
TAGS #transformers #tf #electra #question-answering #generated_from_keras_callback #base_model-deepset/electra-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us
Kiran2004/Electra\_QCA\_Squad ============================= This model is a fine-tuned version of deepset/electra-base-squad2 on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0515 * Validation Loss: 0.1711 * Epoch: 9 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 500, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.40.0 * TensorFlow 2.15.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #electra #question-answering #generated_from_keras_callback #base_model-deepset/electra-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 500, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
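The template above leaves the getting-started code empty. A generic, hedged sketch for a causal language model repo of this kind follows; the repo id is taken from the metadata below, and the card documents no prompt format or intended use, so the prompt and generation settings are placeholders only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Karimsliti/karim_codellama_merged"  # repo id from the metadata below (assumption)
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "def fibonacci(n):"  # placeholder prompt; the card documents no prompt format
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```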
{"library_name": "transformers", "tags": []}
Karimsliti/karim_codellama_merged
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T15:53:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-e5-large-guardrail-financial-advice-classifier-training This model is a fine-tuned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
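No usage snippet or label map is given, so the sketch below is a hedged guess at inference: it assumes the fine-tuned classification head was pushed to the repo named in the metadata below, and the label names it prints are whatever the training run configured (the card does not say, and it is also unknown whether the e5-style `"query: "` prefix was used during fine-tuning). The example texts are placeholders.

```python
from transformers import pipeline

repo = "jamesoneill12/multilingual-e5-large-guardrail-financial-advice-classifier-training"
classifier = pipeline("text-classification", model=repo)

texts = [
    "You should move all of your savings into this one stock today.",  # placeholder inputs
    "Index funds track a market index and spread risk across many holdings.",
]
for text, pred in zip(texts, classifier(texts)):
    print(f'{pred["label"]} ({pred["score"]:.3f}): {text}')
```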
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "intfloat/multilingual-e5-large", "model-index": [{"name": "multilingual-e5-large-guardrail-financial-advice-classifier-training", "results": []}]}
jamesoneill12/multilingual-e5-large-guardrail-financial-advice-classifier-training
null
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:intfloat/multilingual-e5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:54:05+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-intfloat/multilingual-e5-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
# multilingual-e5-large-guardrail-financial-advice-classifier-training This model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# multilingual-e5-large-guardrail-financial-advice-classifier-training\n\nThis model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 6", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-intfloat/multilingual-e5-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# multilingual-e5-large-guardrail-financial-advice-classifier-training\n\nThis model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 6", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/chrischain/Satoshi1337-8B

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Satoshi1337-8B-GGUF/resolve/main/Satoshi1337-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "base_model": "chrischain/Satoshi1337-8B", "extra_gated_button_content": "Submit", "extra_gated_fields": {"Affiliation": "text", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox", "Country": "country", "Date of birth": "date_picker", "First Name": "text", "Last Name": "text", "geo": "ip_location"}, "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "license_link": "LICENSE", "license_name": "llama3", "quantized_by": "mradermacher"}
mradermacher/Satoshi1337-8B-GGUF
null
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "en", "base_model:chrischain/Satoshi1337-8B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:55:15+00:00
[]
[ "en" ]
TAGS #transformers #gguf #facebook #meta #pytorch #llama #llama-3 #en #base_model-chrischain/Satoshi1337-8B #license-other #endpoints_compatible #region-us
About
-----

static quants of URL

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

Usage
-----

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.

Provided Quants
---------------

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

!URL

And here are Artefact2's thoughts on the matter:
URL

FAQ / Model Request
-------------------

See URL for some answers to questions you might have and/or if you want some other model quantized.

Thanks
------

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #facebook #meta #pytorch #llama #llama-3 #en #base_model-chrischain/Satoshi1337-8B #license-other #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
CMU-AIR2/math-deepseek-FULL-ArithHardC12
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T15:57:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
sin2piusc/whisper-medium-anime-5k-tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T15:57:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Qwen-Audio-nf4

This is the quantized version of [Qwen-Audio](https://huggingface.co/Qwen/Qwen-Audio)

# Original Model Card:

# Qwen-Audio

<br>

<p align="center">
    <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Audio/audio_logo.jpg" width="400"/>
<p>
<br>

<p align="center">
        Qwen-Audio <a href="https://www.modelscope.cn/models/qwen/QWen-Audio/summary">🤖 <a> | <a href="https://huggingface.co/Qwen/Qwen-Audio">🤗</a>&nbsp | Qwen-Audio-Chat <a href="https://www.modelscope.cn/models/qwen/QWen-Audio-Chat/summary">🤖 <a>| <a href="https://huggingface.co/Qwen/Qwen-Audio-Chat">🤗</a>&nbsp | &nbsp&nbsp Demo<a href="https://modelscope.cn/studios/qwen/Qwen-Audio-Chat-Demo/summary"> 🤖</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Audio">🤗</a>&nbsp
<br>
&nbsp&nbsp<a href="https://qwen-audio.github.io/Qwen-Audio/">Homepage</a>&nbsp | &nbsp<a href="http://arxiv.org/abs/2311.07919">Paper</a> | &nbsp<a href="https://huggingface.co/papers/2311.07919">🤗</a>
</p>
<br><br>

**Qwen-Audio** (Qwen Large Audio Language Model) is the multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-Audio accepts diverse audio (human speech, natural sound, music and song) and text as inputs, outputs text. The contribution of Qwen-Audio include:

- **Fundamental audio models**: Qwen-Audio is a fundamental multi-task audio-language model that supports various tasks, languages, and audio types, serving as a universal audio understanding model. Building upon Qwen-Audio, we develop Qwen-Audio-Chat through instruction fine-tuning, enabling multi-turn dialogues and supporting diverse audio-oriented scenarios.
- **Multi-task learning framework for all types of audios**: To scale up audio-language pre-training, we address the challenge of variation in textual labels associated with different datasets by proposing a multi-task training framework, enabling knowledge sharing and avoiding one-to-many interference. Our model incorporates more than 30 tasks and extensive experiments show the model achieves strong performance.
- **Strong Performance**: Experimental results show that Qwen-Audio achieves impressive performance across diverse benchmark tasks without requiring any task-specific fine-tuning, surpassing its counterparts. Specifically, Qwen-Audio achieves state-of-the-art results on the test set of Aishell1, cochlscene, ClothoAQA, and VocalSound.
- **Flexible multi-run chat from audio and text input**: Qwen-Audio supports multiple-audio analysis, sound understanding and reasoning, music appreciation, and tool usage for speech editing.

**Qwen-Audio** 是阿里云研发的大规模音频语言模型(Large Audio Language Model)。Qwen-Audio 可以以多种音频 (包括说话人语音、自然音、音乐、歌声)和文本作为输入,并以文本作为输出。Qwen-Audio 系列模型的特点包括:

- **音频基石模型**:Qwen-Audio是一个性能卓越的通用的音频理解模型,支持各种任务、语言和音频类型。在Qwen-Audio的基础上,我们通过指令微调开发了Qwen-Audio-Chat,支持多轮、多语言、多语言对话。Qwen-Audio和Qwen-Audio-Chat模型均已开源。
- **兼容多种复杂音频的多任务学习框架**:为了避免由于数据收集来源不同以及任务类型不同,带来的音频到文本的一对多的干扰问题,我们提出了一种多任务训练框架,实现相似任务的知识共享,并尽可能减少不同任务之间的干扰。通过提出的框架,Qwen-Audio可以容纳训练超过30多种不同的音频任务;
- **出色的性能**:Qwen-Audio在不需要任何任务特定的微调的情况下,在各种基准任务上取得了领先的结果。具体得,Qwen-Audio在Aishell1、cochlscene、ClothoAQA和VocalSound的测试集上都达到了SOTA;
- **支持多轮音频和文本对话,支持各种语音场景**:Qwen-Audio-Chat支持声音理解和推理、音乐欣赏、多音频分析、多轮音频-文本交错对话以及外部语音工具的使用(如语音编辑)。

We release Qwen-Audio and Qwen-Audio-Chat, which are pretrained model and Chat model respectively. For more details about Qwen-Audio, please refer to our [Github Repo](https://github.com/QwenLM/Qwen-Audio/tree/main). This repo is the one for Qwen-Audio.
<br>

目前,我们提供了Qwen-Audio和Qwen-Audio-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于信息,请点击[链接](https://github.com/QwenLM/Qwen-Audio/tree/main)查看Github仓库。本仓库为Qwen-Audio仓库。

## Requirements

* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users)
* FFmpeg
<br>

## Quickstart

Below, we provide simple examples to show how to use Qwen-Audio with 🤗 Transformers.

Before running the code, make sure you have set up the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.

```bash
pip install -r requirements.txt
```

For more details, please refer to [tutorial](https://github.com/QwenLM/Qwen-Audio).

#### 🤗 Transformers

To use Qwen-Audio for the inference, all you need to do is to input a few lines of codes as demonstrated below. However, **please make sure that you are using the latest code.**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch

torch.manual_seed(1234)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-Audio", trust_remote_code=True)

# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio", device_map="cpu", trust_remote_code=True).eval()
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-Audio", device_map="cuda", trust_remote_code=True).eval()

# Specify hyperparameters for generation (No need to do this if you are using transformers>4.32.0)
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-Audio", trust_remote_code=True)

audio_url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Audio/1272-128104-0000.flac"
sp_prompt = "<|startoftranscript|><|en|><|transcribe|><|en|><|notimestamps|><|wo_itn|>"
query = f"<audio>{audio_url}</audio>{sp_prompt}"
audio_info = tokenizer.process_audio(query)
inputs = tokenizer(query, return_tensors='pt', audio_info=audio_info)
inputs = inputs.to(model.device)
pred = model.generate(**inputs, audio_info=audio_info)
response = tokenizer.decode(pred.cpu()[0], skip_special_tokens=False,audio_info=audio_info)
print(response)
# <audio>https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Audio/1272-128104-0000.flac</audio><|startoftranscription|><|en|><|transcribe|><|en|><|notimestamps|><|wo_itn|>mister quilting is the apostle of the middle classes and we are glad to welcome his gospel<|endoftext|>
```

## License Agreement

Researchers and developers are free to use the codes and model weights of Qwen-Audio. We also allow its commercial use. Check our license at [LICENSE](https://github.com/QwenLM/Qwen-Audio/blob/main/LICENSE.txt) for more details.
<br>

## Citation

If you find our paper and code useful in your research, please consider giving a star and citation

```BibTeX
@article{Qwen-Audio,
  title={Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models},
  author={Chu, Yunfei and Xu, Jin and Zhou, Xiaohuan and Yang, Qian and Zhang, Shiliang and Yan, Zhijie and Zhou, Chang and Zhou, Jingren},
  journal={arXiv preprint arXiv:2311.07919},
  year={2023}
}
```
<br>

## Contact Us

If you are interested to leave a message to either our research team or product team, feel free to send an email to [email protected].
{"language": ["zh", "en"], "tags": ["qwen"], "pipeline_tag": "text-generation", "inference": false}
Ostixe360/Qwen-Audio-nf4
null
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "zh", "en", "arxiv:2311.07919", "autotrain_compatible", "4-bit", "region:us" ]
null
2024-04-25T16:03:06+00:00
[ "2311.07919" ]
[ "zh", "en" ]
TAGS #transformers #safetensors #qwen #text-generation #custom_code #zh #en #arxiv-2311.07919 #autotrain_compatible #4-bit #region-us
# Qwen-Audio-nf4 This is the quantized version of Qwen-Audio # Original Model Card: # Qwen-Audio <br> <p align="center"> <img src="URL width="400"/> <p> <br> <p align="center"> Qwen-Audio <a href="URL <a> | <a href="URL | Qwen-Audio-Chat <a href="URL <a>| <a href="URL | &nbsp&nbsp Demo<a href="URL </a> | <a href="URL <br> &nbsp&nbsp<a href="URL | &nbsp<a href="URL | &nbsp<a href="URL </p> <br><br> Qwen-Audio (Qwen Large Audio Language Model) is the multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-Audio accepts diverse audio (human speech, natural sound, music and song) and text as inputs, outputs text. The contribution of Qwen-Audio include: - Fundamental audio models: Qwen-Audio is a fundamental multi-task audio-language model that supports various tasks, languages, and audio types, serving as a universal audio understanding model. Building upon Qwen-Audio, we develop Qwen-Audio-Chat through instruction fine-tuning, enabling multi-turn dialogues and supporting diverse audio-oriented scenarios. - Multi-task learning framework for all types of audios: To scale up audio-language pre-training, we address the challenge of variation in textual labels associated with different datasets by proposing a multi-task training framework, enabling knowledge sharing and avoiding one-to-many interference. Our model incorporates more than 30 tasks and extensive experiments show the model achieves strong performance. - Strong Performance: Experimental results show that Qwen-Audio achieves impressive performance across diverse benchmark tasks without requiring any task-specific fine-tuning, surpassing its counterparts. Specifically, Qwen-Audio achieves state-of-the-art results on the test set of Aishell1, cochlscene, ClothoAQA, and VocalSound. - Flexible multi-run chat from audio and text input: Qwen-Audio supports multiple-audio analysis, sound understading and reasoning, music appreciation, and tool usage for speech editing. Qwen-Audio 是阿里云研发的大规模音频语言模型(Large Audio Language Model)。Qwen-Audio 可以以多种音频 (包括说话人语音、自然音、音乐、歌声)和文本作为输入,并以文本作为输出。Qwen-Audio 系列模型的特点包括: - 音频基石模型:Qwen-Audio是一个性能卓越的通用的音频理解模型,支持各种任务、语言和音频类型。在Qwen-Audio的基础上,我们通过指令微调开发了Qwen-Audio-Chat,支持多轮、多语言、多语言对话。Qwen-Audio和Qwen-Audio-Chat模型均已开源。 - 兼容多种复杂音频的多任务学习框架:为了避免由于数据收集来源不同以及任务类型不同,带来的音频到文本的一对多的干扰问题,我们提出了一种多任务训练框架,实现相似任务的知识共享,并尽可能减少不同任务之间的干扰。通过提出的框架,Qwen-Audio可以容纳训练超过30多种不同的音频任务; - 出色的性能:Qwen-Audio在不需要任何任务特定的微调的情况下,在各种基准任务上取得了领先的结果。具体得,Qwen-Audio在Aishell1、cochlscene、ClothoAQA和VocalSound的测试集上都达到了SOTA; - 支持多轮音频和文本对话,支持各种语音场景:Qwen-Audio-Chat支持声音理解和推理、音乐欣赏、多音频分析、多轮音频-文本交错对话以及外部语音工具的使用(如语音编辑)。 We release Qwen-Audio and Qwen-Audio-Chat, which are pretrained model and Chat model respectively. For more details about Qwen-Audio, please refer to our Github Repo. This repo is the one for Qwen-Audio. <br> 目前,我们提供了Qwen-Audio和Qwen-Audio-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于信息,请点击链接查看Github仓库。本仓库为Qwen-Audio仓库。 ## Requirements * python 3.8 and above * pytorch 1.12 and above, 2.0 and above are recommended * CUDA 11.4 and above are recommended (this is for GPU users) * FFmpeg <br> ## Quickstart Below, we provide simple examples to show how to use Qwen-Audio with Transformers. Before running the code, make sure you have setup the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries. For more details, please refer to tutorial. 
#### Transformers To use Qwen-Audio for the inference, all you need to do is to input a few lines of codes as demonstrated below. However, please make sure that you are using the latest code. ## License Agreement Researchers and developers are free to use the codes and model weights of Qwen-Audio. We also allow its commercial use. Check our license at LICENSE for more details. <br> If you find our paper and code useful in your research, please consider giving a star and citation <br> ## Contact Us If you are interested to leave a message to either our research team or product team, feel free to send an email to qianwen_opensource@URL.
[ "# Qwen-Audio-nf4\n\nThis is the quantized version of Qwen-Audio", "# Original Model Card:", "# Qwen-Audio\n\n<br>\n\n<p align=\"center\">\n <img src=\"URL width=\"400\"/>\n<p>\n<br>\n\n<p align=\"center\">\n Qwen-Audio <a href=\"URL <a> | <a href=\"URL | Qwen-Audio-Chat <a href=\"URL <a>| <a href=\"URL | &nbsp&nbsp Demo<a href=\"URL </a> | <a href=\"URL\n<br>\n&nbsp&nbsp<a href=\"URL | &nbsp<a href=\"URL | &nbsp<a href=\"URL \n</p>\n<br><br>\n\nQwen-Audio (Qwen Large Audio Language Model) is the multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-Audio accepts diverse audio (human speech, natural sound, music and song) and text as inputs, outputs text. The contribution of Qwen-Audio include:\n\n- Fundamental audio models: Qwen-Audio is a fundamental multi-task audio-language model that supports various tasks, languages, and audio types, serving as a universal audio understanding model. Building upon Qwen-Audio, we develop Qwen-Audio-Chat through instruction fine-tuning, enabling multi-turn dialogues and supporting diverse audio-oriented scenarios.\n- Multi-task learning framework for all types of audios: To scale up audio-language pre-training, we address the challenge of variation in textual labels associated with different datasets by proposing a multi-task training framework, enabling knowledge sharing and avoiding one-to-many interference. Our model incorporates more than 30 tasks and extensive experiments show the model achieves strong performance.\n- Strong Performance: Experimental results show that Qwen-Audio achieves impressive performance across diverse benchmark tasks without requiring any task-specific fine-tuning, surpassing its counterparts. Specifically, Qwen-Audio achieves state-of-the-art results on the test set of Aishell1, cochlscene, ClothoAQA, and VocalSound.\n- Flexible multi-run chat from audio and text input: Qwen-Audio supports multiple-audio analysis, sound understading and reasoning, music appreciation, and tool usage for speech editing.\n\nQwen-Audio 是阿里云研发的大规模音频语言模型(Large Audio Language Model)。Qwen-Audio 可以以多种音频 (包括说话人语音、自然音、音乐、歌声)和文本作为输入,并以文本作为输出。Qwen-Audio 系列模型的特点包括:\n\n- 音频基石模型:Qwen-Audio是一个性能卓越的通用的音频理解模型,支持各种任务、语言和音频类型。在Qwen-Audio的基础上,我们通过指令微调开发了Qwen-Audio-Chat,支持多轮、多语言、多语言对话。Qwen-Audio和Qwen-Audio-Chat模型均已开源。\n- 兼容多种复杂音频的多任务学习框架:为了避免由于数据收集来源不同以及任务类型不同,带来的音频到文本的一对多的干扰问题,我们提出了一种多任务训练框架,实现相似任务的知识共享,并尽可能减少不同任务之间的干扰。通过提出的框架,Qwen-Audio可以容纳训练超过30多种不同的音频任务;\n- 出色的性能:Qwen-Audio在不需要任何任务特定的微调的情况下,在各种基准任务上取得了领先的结果。具体得,Qwen-Audio在Aishell1、cochlscene、ClothoAQA和VocalSound的测试集上都达到了SOTA;\n- 支持多轮音频和文本对话,支持各种语音场景:Qwen-Audio-Chat支持声音理解和推理、音乐欣赏、多音频分析、多轮音频-文本交错对话以及外部语音工具的使用(如语音编辑)。\n\n\nWe release Qwen-Audio and Qwen-Audio-Chat, which are pretrained model and Chat model respectively. For more details about Qwen-Audio, please refer to our Github Repo. This repo is the one for Qwen-Audio.\n<br>\n\n目前,我们提供了Qwen-Audio和Qwen-Audio-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于信息,请点击链接查看Github仓库。本仓库为Qwen-Audio仓库。", "## Requirements\n* python 3.8 and above\n* pytorch 1.12 and above, 2.0 and above are recommended\n* CUDA 11.4 and above are recommended (this is for GPU users)\n* FFmpeg\n <br>", "## Quickstart\nBelow, we provide simple examples to show how to use Qwen-Audio with Transformers.\n\nBefore running the code, make sure you have setup the environment and installed the required packages. 
Make sure you meet the above requirements, and then install the dependent libraries.\n\n\nFor more details, please refer to tutorial.", "#### Transformers\n\nTo use Qwen-Audio for the inference, all you need to do is to input a few lines of codes as demonstrated below. However, please make sure that you are using the latest code.", "## License Agreement\nResearchers and developers are free to use the codes and model weights of Qwen-Audio. We also allow its commercial use. Check our license at LICENSE for more details.\n<br>\n\nIf you find our paper and code useful in your research, please consider giving a star and citation\n\n\n<br>", "## Contact Us\n\nIf you are interested to leave a message to either our research team or product team, feel free to send an email to qianwen_opensource@URL." ]
[ "TAGS\n#transformers #safetensors #qwen #text-generation #custom_code #zh #en #arxiv-2311.07919 #autotrain_compatible #4-bit #region-us \n", "# Qwen-Audio-nf4\n\nThis is the quantized version of Qwen-Audio", "# Original Model Card:", "# Qwen-Audio\n\n<br>\n\n<p align=\"center\">\n <img src=\"URL width=\"400\"/>\n<p>\n<br>\n\n<p align=\"center\">\n Qwen-Audio <a href=\"URL <a> | <a href=\"URL | Qwen-Audio-Chat <a href=\"URL <a>| <a href=\"URL | &nbsp&nbsp Demo<a href=\"URL </a> | <a href=\"URL\n<br>\n&nbsp&nbsp<a href=\"URL | &nbsp<a href=\"URL | &nbsp<a href=\"URL \n</p>\n<br><br>\n\nQwen-Audio (Qwen Large Audio Language Model) is the multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-Audio accepts diverse audio (human speech, natural sound, music and song) and text as inputs, outputs text. The contribution of Qwen-Audio include:\n\n- Fundamental audio models: Qwen-Audio is a fundamental multi-task audio-language model that supports various tasks, languages, and audio types, serving as a universal audio understanding model. Building upon Qwen-Audio, we develop Qwen-Audio-Chat through instruction fine-tuning, enabling multi-turn dialogues and supporting diverse audio-oriented scenarios.\n- Multi-task learning framework for all types of audios: To scale up audio-language pre-training, we address the challenge of variation in textual labels associated with different datasets by proposing a multi-task training framework, enabling knowledge sharing and avoiding one-to-many interference. Our model incorporates more than 30 tasks and extensive experiments show the model achieves strong performance.\n- Strong Performance: Experimental results show that Qwen-Audio achieves impressive performance across diverse benchmark tasks without requiring any task-specific fine-tuning, surpassing its counterparts. Specifically, Qwen-Audio achieves state-of-the-art results on the test set of Aishell1, cochlscene, ClothoAQA, and VocalSound.\n- Flexible multi-run chat from audio and text input: Qwen-Audio supports multiple-audio analysis, sound understading and reasoning, music appreciation, and tool usage for speech editing.\n\nQwen-Audio 是阿里云研发的大规模音频语言模型(Large Audio Language Model)。Qwen-Audio 可以以多种音频 (包括说话人语音、自然音、音乐、歌声)和文本作为输入,并以文本作为输出。Qwen-Audio 系列模型的特点包括:\n\n- 音频基石模型:Qwen-Audio是一个性能卓越的通用的音频理解模型,支持各种任务、语言和音频类型。在Qwen-Audio的基础上,我们通过指令微调开发了Qwen-Audio-Chat,支持多轮、多语言、多语言对话。Qwen-Audio和Qwen-Audio-Chat模型均已开源。\n- 兼容多种复杂音频的多任务学习框架:为了避免由于数据收集来源不同以及任务类型不同,带来的音频到文本的一对多的干扰问题,我们提出了一种多任务训练框架,实现相似任务的知识共享,并尽可能减少不同任务之间的干扰。通过提出的框架,Qwen-Audio可以容纳训练超过30多种不同的音频任务;\n- 出色的性能:Qwen-Audio在不需要任何任务特定的微调的情况下,在各种基准任务上取得了领先的结果。具体得,Qwen-Audio在Aishell1、cochlscene、ClothoAQA和VocalSound的测试集上都达到了SOTA;\n- 支持多轮音频和文本对话,支持各种语音场景:Qwen-Audio-Chat支持声音理解和推理、音乐欣赏、多音频分析、多轮音频-文本交错对话以及外部语音工具的使用(如语音编辑)。\n\n\nWe release Qwen-Audio and Qwen-Audio-Chat, which are pretrained model and Chat model respectively. For more details about Qwen-Audio, please refer to our Github Repo. This repo is the one for Qwen-Audio.\n<br>\n\n目前,我们提供了Qwen-Audio和Qwen-Audio-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于信息,请点击链接查看Github仓库。本仓库为Qwen-Audio仓库。", "## Requirements\n* python 3.8 and above\n* pytorch 1.12 and above, 2.0 and above are recommended\n* CUDA 11.4 and above are recommended (this is for GPU users)\n* FFmpeg\n <br>", "## Quickstart\nBelow, we provide simple examples to show how to use Qwen-Audio with Transformers.\n\nBefore running the code, make sure you have setup the environment and installed the required packages. 
Make sure you meet the above requirements, and then install the dependent libraries.\n\n\nFor more details, please refer to tutorial.", "#### Transformers\n\nTo use Qwen-Audio for the inference, all you need to do is to input a few lines of codes as demonstrated below. However, please make sure that you are using the latest code.", "## License Agreement\nResearchers and developers are free to use the codes and model weights of Qwen-Audio. We also allow its commercial use. Check our license at LICENSE for more details.\n<br>\n\nIf you find our paper and code useful in your research, please consider giving a star and citation\n\n\n<br>", "## Contact Us\n\nIf you are interested to leave a message to either our research team or product team, feel free to send an email to qianwen_opensource@URL." ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mental_health_model_long This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6509 - Accuracy: 0.7423 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 0.7146 | 1.0 | 1077 | 0.7423 | 0.6509 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
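The card above lists training details but no inference snippet; the following is a minimal sketch of how the checkpoint could be loaded for classification with the `transformers` pipeline. The model id comes from this record, while the example input and the meaning of the returned labels are assumptions — the card does not document them.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned Longformer classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="AlexanderHolmes0/mental_health_model_long",
)

# longformer-base-4096 accepts long inputs; truncation guards against anything longer.
example_text = "I have been feeling overwhelmed and unable to sleep for weeks."  # hypothetical input
print(classifier(example_text, truncation=True, max_length=4096))
```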
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "allenai/longformer-base-4096", "model-index": [{"name": "mental_health_model_long", "results": []}]}
AlexanderHolmes0/mental_health_model_long
null
[ "transformers", "tensorboard", "safetensors", "longformer", "text-classification", "generated_from_trainer", "base_model:allenai/longformer-base-4096", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:05:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #longformer #text-classification #generated_from_trainer #base_model-allenai/longformer-base-4096 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
mental\_health\_model\_long =========================== This model is a fine-tuned version of allenai/longformer-base-4096 on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6509 * Accuracy: 0.7423 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #longformer #text-classification #generated_from_trainer #base_model-allenai/longformer-base-4096 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
null
# M7Percival_01-7B M7Percival_01-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: liminerity/M7-7b - model: AurelPx/Percival_01-7b-slerp merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/M7Percival_01-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/M7Percival_01-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-04-25T16:06:21+00:00
[]
[]
TAGS #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
# M7Percival_01-7B M7Percival_01-7B is an automated merge created by Maxime Labonne using the following configuration. ## Configuration ## Usage
[ "# M7Percival_01-7B\n\nM7Percival_01-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
[ "TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n", "# M7Percival_01-7B\n\nM7Percival_01-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
sentence-similarity
sentence-transformers
# SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 2048-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 1000000 tokens - **Output Dimensionality:** 2048 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(400001, 300) ) (1): LSTM( (encoder): LSTM(300, 1024, batch_first=True, bidirectional=True) ) (2): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomaarsen/glove-bilstm-sts") # Run inference sentences = [ 'a woman has a child.', 'A pregnant woman is in labor', 'Some cyclists stop near a sign.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 2048] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7709 | | **spearman_cosine** | **0.7658** | | pearson_manhattan | 0.7475 | | spearman_manhattan | 0.7523 | | pearson_euclidean | 0.7489 | | spearman_euclidean | 0.7541 | | pearson_dot | 0.6125 | | spearman_dot | 0.6662 | | pearson_max | 0.7709 | | spearman_max | 0.7658 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? 
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### sentence-transformers/stsb * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 1 tokens</li><li>mean: 3.38 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 3.39 tokens</li><li>max: 10 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 1 tokens</li><li>mean: 5.17 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 5.08 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------|:------------------------------------------------------|:------------------| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | * Loss: 
[<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: False - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: None - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - 
`batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | |:------:|:----:|:-------------:|:------:|:-----------------------:| | 0.5556 | 100 | 0.0809 | 0.0566 | 0.7658 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.000 kWh - **Carbon Emitted**: 0.000 kg of CO2 - **Hours Used**: 0.003 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 3.0.0.dev0 - Transformers: 4.41.0.dev0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.26.1 - Datasets: 2.18.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"language": ["en"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss"], "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "widget": [{"source_sentence": "A man is spitting.", "sentences": ["A man is crying.", "Bombings kill 19 people in Iraq", "Three women are sitting near a wall."]}, {"source_sentence": "A plane in the sky.", "sentences": ["Two airplanes in the sky.", "Suicide bomber strikes in Syria", "Two women posing with a baby."]}, {"source_sentence": "A woman is reading.", "sentences": ["A woman is writing something.", "Some cyclists stop near a sign.", "Someone is greating a carrot."]}, {"source_sentence": "A man is speaking.", "sentences": ["A man is talking.", "Bombings kill 19 people in Iraq", "Kittens are eating food on trays."]}, {"source_sentence": "a woman has a child.", "sentences": ["A pregnant woman is in labor", "Some cyclists stop near a sign.", "Someone is stirring chili in a kettle."]}], "pipeline_tag": "sentence-similarity", "co2_eq_emissions": {"emissions": 0.17244918455341185, "energy_consumed": 0.0004436539677012515, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.003, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.7708672762349984, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.7657600316758283, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.7474564039693722, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.75228158575576, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.7489387720530025, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.7541126864285251, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.6124844196169514, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.6662313602123413, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.7708672762349984, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.7657600316758283, "name": "Spearman Max"}]}]}]}
tomaarsen/glove-bilstm-sts
null
[ "sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss", "en", "arxiv:1908.10084", "model-index", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:06:27+00:00
[ "1908.10084" ]
[ "en" ]
TAGS #sentence-transformers #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #model-index #co2_eq_emissions #endpoints_compatible #region-us
SentenceTransformer =================== This is a sentence-transformers model trained on the sentence-transformers/stsb dataset. It maps sentences & paragraphs to a 2048-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Model Details ------------- ### Model Description * Model Type: Sentence Transformer * Maximum Sequence Length: 1000000 tokens * Output Dimensionality: 2048 tokens * Similarity Function: Cosine Similarity * Training Dataset: + sentence-transformers/stsb * Language: en ### Model Sources * Documentation: Sentence Transformers Documentation * Repository: Sentence Transformers on GitHub * Hugging Face: Sentence Transformers on Hugging Face ### Full Model Architecture Usage ----- ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: Then you can load this model and run inference. Evaluation ---------- ### Metrics #### Semantic Similarity * Dataset: 'sts-dev' * Evaluated with `EmbeddingSimilarityEvaluator` Training Details ---------------- ### Training Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 5,749 training samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 1,500 evaluation samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Training Hyperparameters #### Non-Default Hyperparameters * 'eval\_strategy': steps * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'num\_train\_epochs': 1 * 'warmup\_ratio': 0.1 * 'fp16': True #### All Hyperparameters Click to expand * 'overwrite\_output\_dir': False * 'do\_predict': False * 'eval\_strategy': steps * 'prediction\_loss\_only': False * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'per\_gpu\_train\_batch\_size': None * 'per\_gpu\_eval\_batch\_size': None * 'gradient\_accumulation\_steps': 1 * 'eval\_accumulation\_steps': None * 'learning\_rate': 5e-05 * 'weight\_decay': 0.0 * 'adam\_beta1': 0.9 * 'adam\_beta2': 0.999 * 'adam\_epsilon': 1e-08 * 'max\_grad\_norm': 1.0 * 'num\_train\_epochs': 1 * 'max\_steps': -1 * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_kwargs': {} * 'warmup\_ratio': 0.1 * 'warmup\_steps': 0 * 'log\_level': passive * 'log\_level\_replica': warning * 'log\_on\_each\_node': True * 'logging\_nan\_inf\_filter': True * 'save\_safetensors': True * 'save\_on\_each\_node': False * 'save\_only\_model': False * 'no\_cuda': False * 'use\_cpu': False * 'use\_mps\_device': False * 'seed': 42 * 'data\_seed': None * 'jit\_mode\_eval': False * 'use\_ipex': False * 'bf16': False * 'fp16': True * 'fp16\_opt\_level': O1 * 'half\_precision\_backend': auto * 'bf16\_full\_eval': False * 'fp16\_full\_eval': False * 'tf32': None * 'local\_rank': 0 * 'ddp\_backend': None * 'tpu\_num\_cores': None * 'tpu\_metrics\_debug': False * 'debug': [] * 'dataloader\_drop\_last': False * 'dataloader\_num\_workers': 0 * 'dataloader\_prefetch\_factor': None * 'past\_index': -1 * 'disable\_tqdm': False * 'remove\_unused\_columns': True * 'label\_names': None * 'load\_best\_model\_at\_end': False * 'ignore\_data\_skip': False * 
'fsdp': [] * 'fsdp\_min\_num\_params': 0 * 'fsdp\_config': {'min\_num\_params': 0, 'xla': False, 'xla\_fsdp\_v2': False, 'xla\_fsdp\_grad\_ckpt': False} * 'fsdp\_transformer\_layer\_cls\_to\_wrap': None * 'accelerator\_config': {'split\_batches': False, 'dispatch\_batches': None, 'even\_batches': True, 'use\_seedable\_sampler': True, 'non\_blocking': False, 'gradient\_accumulation\_kwargs': None} * 'deepspeed': None * 'label\_smoothing\_factor': 0.0 * 'optim': adamw\_torch * 'optim\_args': None * 'adafactor': False * 'group\_by\_length': False * 'length\_column\_name': length * 'ddp\_find\_unused\_parameters': None * 'ddp\_bucket\_cap\_mb': None * 'ddp\_broadcast\_buffers': None * 'dataloader\_pin\_memory': True * 'dataloader\_persistent\_workers': False * 'skip\_memory\_metrics': True * 'use\_legacy\_prediction\_loop': False * 'push\_to\_hub': False * 'resume\_from\_checkpoint': None * 'hub\_model\_id': None * 'hub\_strategy': every\_save * 'hub\_private\_repo': False * 'hub\_always\_push': False * 'gradient\_checkpointing': False * 'gradient\_checkpointing\_kwargs': None * 'include\_inputs\_for\_metrics': False * 'eval\_do\_concat\_batches': True * 'fp16\_backend': auto * 'push\_to\_hub\_model\_id': None * 'push\_to\_hub\_organization': None * 'mp\_parameters': * 'auto\_find\_batch\_size': False * 'full\_determinism': False * 'torchdynamo': None * 'ray\_scope': last * 'ddp\_timeout': 1800 * 'torch\_compile': False * 'torch\_compile\_backend': None * 'torch\_compile\_mode': None * 'dispatch\_batches': None * 'split\_batches': None * 'include\_tokens\_per\_second': False * 'include\_num\_input\_tokens\_seen': False * 'neftune\_noise\_alpha': None * 'optim\_target\_modules': None * 'batch\_sampler': batch\_sampler * 'multi\_dataset\_batch\_sampler': proportional ### Training Logs ### Environmental Impact Carbon emissions were measured using CodeCarbon. * Energy Consumed: 0.000 kWh * Carbon Emitted: 0.000 kg of CO2 * Hours Used: 0.003 hours ### Training Hardware * On Cloud: No * GPU Model: 1 x NVIDIA GeForce RTX 3090 * CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K * RAM Size: 31.78 GB ### Framework Versions * Python: 3.11.6 * Sentence Transformers: 3.0.0.dev0 * Transformers: 4.41.0.dev0 * PyTorch: 2.3.0+cu121 * Accelerate: 0.26.1 * Datasets: 2.18.0 * Tokenizers: 0.19.1 ### BibTeX #### Sentence Transformers
[ "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Maximum Sequence Length: 1000000 tokens\n* Output Dimensionality: 2048 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\n\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 
'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.000 kWh\n* Carbon Emitted: 0.000 kg of CO2\n* Hours Used: 0.003 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
[ "TAGS\n#sentence-transformers #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #model-index #co2_eq_emissions #endpoints_compatible #region-us \n", "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Maximum Sequence Length: 1000000 tokens\n* Output Dimensionality: 2048 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\n\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 
'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.000 kWh\n* Carbon Emitted: 0.000 kg of CO2\n* Hours Used: 0.003 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
text-generation
transformers
# Uploaded model - **Developed by:** Ramikan-BR - **License:** apache-2.0 - **Finetuned from model:** unsloth/tinyllama-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
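The card does not include an inference example. Below is a hedged sketch that assumes the repository can be loaded directly with `transformers` as a causal LM (if it only contains LoRA adapters, it would instead need to be attached to the unsloth/tinyllama-bnb-4bit base with PEFT). The prompt shown is a guess; the card does not document a prompt format.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Ramikan-BR/tinyllama_PY-CODER-bnb-4bit-lora_model-4k"

# Sketch only: bitsandbytes must be installed for 4-bit checkpoints;
# device_map="auto" places the weights on a GPU when one is available.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a string."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```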
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "tinyllamacoder-py", "coder-py", "coder"], "base_model": "unsloth/tinyllama-bnb-4bit"}
Ramikan-BR/tinyllama_PY-CODER-bnb-4bit-lora_model-4k
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "tinyllamacoder-py", "coder-py", "coder", "en", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "4-bit", "region:us" ]
null
2024-04-25T16:06:30+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #tinyllamacoder-py #coder-py #coder #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #4-bit #region-us
# Uploaded model - Developed by: Ramikan-BR - License: apache-2.0 - Finetuned from model: unsloth/tinyllama-bnb-4bit This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Ramikan-BR\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #tinyllamacoder-py #coder-py #coder #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #4-bit #region-us \n", "# Uploaded model\n\n- Developed by: Ramikan-BR\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
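Because the card above is an unfilled template, there is no official usage code. The sketch below is based only on the repository name and tags: it assumes the repo stores PEFT/LoRA adapters for BLIP-2 captioning and that `Salesforce/blip2-opt-2.7b` is the base checkpoint — both are assumptions, not documented facts.

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from peft import PeftModel

base_id = "Salesforce/blip2-opt-2.7b"  # assumed base checkpoint
adapter_id = "Entreprenerdly/blip2-opt-2.7b-football-captions-adapters"

processor = Blip2Processor.from_pretrained(base_id)
base_model = Blip2ForConditionalGeneration.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapters on top of the base model (assumes PEFT weights).
model = PeftModel.from_pretrained(base_model, adapter_id)

image_url = "https://example.com/football.jpg"  # hypothetical image
image = Image.open(requests.get(image_url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt").to(base_model.device)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```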
{"library_name": "transformers", "tags": []}
Entreprenerdly/blip2-opt-2.7b-football-captions-adapters
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:14:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# Antler-7B-Novel-Writing-GGUF ## Overview This is a quantized GGUF version of [Aratako/SniffyOtter-7B-Novel-Writing-NSFW](https://huggingface.co/Aratako/SniffyOtter-7B-Novel-Writing-NSFW). For details such as the license, please refer to the original model.
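The card gives no loading instructions. Since the repository ships GGUF quantizations, one common way to run them locally is `llama-cpp-python`; the sketch below uses a placeholder file name (download the actual GGUF file from the repo first, e.g. with `huggingface_hub.hf_hub_download`) and a plain completion-style prompt, since the expected prompt format is not documented here.

```python
from llama_cpp import Llama

# Placeholder file name -- point model_path at the GGUF file actually downloaded
# from the repository.
llm = Llama(model_path="SniffyOtter-7B-Novel-Writing-NSFW-Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "昔々、あるところに",  # hypothetical Japanese story-writing prompt
    max_tokens=256,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```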
{"language": ["ja"], "license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["Aratako/Syosetu711K-Cleaned-158K-Instruct"], "base_model": ["Aratako/SniffyOtter-7B-Novel-Writing-NSFW"]}
Aratako/SniffyOtter-7B-Novel-Writing-NSFW-GGUF
null
[ "gguf", "not-for-all-audiences", "nsfw", "ja", "dataset:Aratako/Syosetu711K-Cleaned-158K-Instruct", "base_model:Aratako/SniffyOtter-7B-Novel-Writing-NSFW", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-25T16:15:04+00:00
[]
[ "ja" ]
TAGS #gguf #not-for-all-audiences #nsfw #ja #dataset-Aratako/Syosetu711K-Cleaned-158K-Instruct #base_model-Aratako/SniffyOtter-7B-Novel-Writing-NSFW #license-cc-by-nc-4.0 #region-us
# Antler-7B-Novel-Writing-GGUF ## Overview This is a quantized GGUF version of Aratako/SniffyOtter-7B-Novel-Writing-NSFW. For details such as the license, please refer to the original model.
[ "# Antler-7B-Novel-Writing-GGUF", "## 概要\nAratako/SniffyOtter-7B-Novel-Writing-NSFWの量子化済みGGUF版です。ライセンス等詳細は元モデルをご確認ください。" ]
[ "TAGS\n#gguf #not-for-all-audiences #nsfw #ja #dataset-Aratako/Syosetu711K-Cleaned-158K-Instruct #base_model-Aratako/SniffyOtter-7B-Novel-Writing-NSFW #license-cc-by-nc-4.0 #region-us \n", "# Antler-7B-Novel-Writing-GGUF", "## 概要\nAratako/SniffyOtter-7B-Novel-Writing-NSFWの量子化済みGGUF版です。ライセンス等詳細は元モデルをご確認ください。" ]
text-generation
transformers
# Model Card for CBTLlama: Fine Tuning LLaMA for CBT Thought Distortions ## Model Details ### Model Description Developed by David Schiff, this Hugging Face transformers model, dubbed CBTLlama, is fine-tuned on the LLaMA-3 8B architecture. It is specifically tailored to enhance Cognitive Behavioral Therapy (CBT) by detecting thought distortions and raising possible challenges for them. The model is trained on synthetic data generated by Claude, covering a variety of demographic profiles and emotional states, to produce CBT scenarios, aiming to make CBT more accessible and effective. This model is not intended to be used without professional assistance! ## Disclaimer ### Limitation of Liability The developer of CBTLlama ("the model") provides this model on an "AS IS" basis and makes no warranties regarding its performance, accuracy, reliability, or suitability for any particular task or to achieve any specific results. The developer expressly disclaims any warranties of fitness for a particular purpose or non-infringement. In no event shall the developer be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this model, even if advised of the possibility of such damage. This model is not intended to be a substitute for professional advice, diagnosis, or treatment. Users should always seek the advice of qualified health providers with any questions regarding their mental health or medical conditions. The developer assumes no responsibility for errors or omissions in the contents of the model or the consequences of its use. - **Developed by:** David Schiff - **Model type:** Fine-tuned LLaMA-3 8B - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model:** LLaMA-3 8B ### Model Sources - **Repository:** (URL to GitHub or similar) - **Paper [optional]:** (Link to any published research or documentation) - **Demo [optional]:** (Link to a model demonstration or interactive API) ## Uses ### Direct Use CBTLlama is intended to be used directly by mental health practitioners to train their patients in identifying cognitive distortions and challenging them. ### Downstream Use While primarily designed for CBT, this model could be extended to other forms of therapy that require scenario generation or tailored mental health interventions. ### Out-of-Scope Use This model is not intended to replace therapists or make clinical decisions. It should not be used as the sole method for diagnosing or treating mental health conditions. ## Bias, Risks, and Limitations The model might exhibit biases based on the demographic data it was trained on. Users should critically assess the scenarios it generates, especially when using the model with diverse populations. ### Recommendations It is recommended that all outputs be reviewed by qualified professionals to ensure they are appropriate and sensitive to individual circumstances. ## How to Get Started with the Model To start using CBTLlama, you can access the model via the Hugging Face API or download it directly from the repository. 
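Below is a hedged sketch of the "Hugging Face API" route mentioned above, using the `transformers` chat template. It assumes the fine-tune kept a Llama-3-style chat format and that the tokenizer ships a chat template, which the card does not confirm; the user message is a hypothetical example, and any output should still be reviewed by a qualified professional.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "epsilon3/cbt-llama3-8b-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Hypothetical triggering-event description from a user.
messages = [
    {"role": "user", "content": "I missed an important deadline and let my team down. "
                                "What cognitive distortions might be in my thinking?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=400, do_sample=True, temperature=0.7)
# Strip the prompt tokens and print only the newly generated reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```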
## Training Details ### Training Data The training data comprised simulated CBT scenarios generated by Claude, based on diverse demographic profiles and emotional states, ensuring broad coverage of potential therapy situations. dataset: https://huggingface.co/datasets/epsilon3/cbt-triggering-events https://huggingface.co/datasets/epsilon3/cbt-cognitive-distortions-analysis ### Training Procedure #### Training Hyperparameters - **Training regime:** Mixed precision training for efficiency ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Metrics ### Results Results indicated that CBTLlama produces highly accurate detections and challenges of thought distortions. #### Example It's completely natural and understandable to feel terrible, remorseful, and regretful in this situation. Letting your team down and causing them unnecessary stress due to procrastination is a difficult thing to cope with, and your emotions are valid. It's important to acknowledge and process these feelings as part of learning and growth. Here are some potential cognitive distortions and challenges based on the situation you described: * Overgeneralization: "I always let my team down and cause them stress." Challenge: Recognize that this specific incident does not define a permanent pattern. Consider times when you have been responsible and met deadlines on time, which discredits the overgeneralization. * Labeling: "I'm a terrible team member for letting this happen." Challenge: Avoid defining yourself based on a single event. You are a complex individual with many qualities, and one mistake does not negate your overall value as a team member. * Magnification (catastrophizing): "This one mistake ruins everything and makes me a failure." Challenge: Put the situation into perspective. While it was an important deadline, it does not negate all your other contributions and successes. Consider how much this specific incident will matter in the long run. * Should statements: "I should have managed my time better" or "I shouldn't have let this happen." Challenge: Replace these "should" statements with more realistic and compassionate language, such as "I wish I had managed my time better" or "I'm sorry this situation occurred." Recognize that everyone makes mistakes and that being hard on yourself is not productive. Remember, everyone faces challenges and makes mistakes from time to time. The most important thing is to learn from this experience, take responsibility for your actions, and find ways to prevent similar situations in the future. Be kind to yourself and focus on moving forward productively. ## Technical Specifications ### Model Architecture and Objective The model utilizes the LLaMA-3 architecture with modifications to specifically suit CBT cognitive distortions analysis ## Citation CBTLlama: Fine Tuning Large Language Models For Identifying Thought Distortions David Schiff [email protected]
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
epsilon3/cbt-llama3-8b-finetuned
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:16:40+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #unsloth #trl #sft #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for CBTLlama: Fine Tuning LLaMA for CBT Thought Distortions ## Model Details ### Model Description Developed by David Schiff, this Hugging Face transformers model, dubbed CBTLlama, is fine-tuned on the LLaMA-3 8B architecture. It is specifically tailored to enhance Cognitive Behavioral Therapy (CBT) by detecting thought distortions and raising possible challenges for them. The model is trained on synthetic data generated by claude that includes a variety of different demographic and emotional states to produce CBT scenarios, aiming to make CBT more accessible and effective. This model is not inteded to use without any professional assistance! ## Disclaimer ### Limitation of Liability The developer of CBTLlama ("the model") provides this model on an "AS IS" basis and makes no warranties regarding its performance, accuracy, reliability, or suitability for any particular task or to achieve any specific results. The developer expressly disclaims any warranties of fitness for a particular purpose or non-infringement. In no event shall the developer be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this model, even if advised of the possibility of such damage. This model is not intended to be a substitute for professional advice, diagnosis, or treatment. Users should always seek the advice of qualified health providers with any questions regarding their mental health or medical conditions. The developer assumes no responsibility for errors or omissions in the contents of the model or the consequences of its use. - Developed by: David Schiff - Model type: Fine-tuned LLaMA-3 8B - Language(s) (NLP): English - License: MIT - Finetuned from model: LLaMA-3 8B ### Model Sources - Repository: (URL to GitHub or similar) - Paper [optional]: (Link to any published research or documentation) - Demo [optional]: (Link to a model demonstration or interactive API) ## Uses ### Direct Use CBTLlama is intended to be used directly by mental health practitioners to train their patients in identifying cognitive distortions and challenging them. ### Downstream Use While primarily designed for CBT, this model could be extended to other forms of therapy that require scenario generation or tailored mental health interventions. ### Out-of-Scope Use This model is not intended to replace therapists or make clinical decisions. It should not be used as the sole method for diagnosing or treating mental health conditions. ## Bias, Risks, and Limitations The model might exhibit biases based on the demographic data it was trained on. Users should critically assess the scenarios it generates, especially when using the model with diverse populations. ### Recommendations It is recommended that all outputs be reviewed by qualified professionals to ensure they are appropriate and sensitive to individual circumstances. ## How to Get Started with the Model To start using CBTLlama, you can access the model via the Hugging Face API or download it directly from the repository. 
## Training Details ### Training Data The training data comprised simulated CBT scenarios generated by Claude, based on diverse demographic profiles and emotional states, ensuring broad coverage of potential therapy situations. dataset: URL URL ### Training Procedure #### Training Hyperparameters - Training regime: Mixed precision training for efficiency ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Metrics ### Results Results indicated that CBTLlama produces highly accurate detections and challenges of thought distortions. #### Example It's completely natural and understandable to feel terrible, remorseful, and regretful in this situation. Letting your team down and causing them unnecessary stress due to procrastination is a difficult thing to cope with, and your emotions are valid. It's important to acknowledge and process these feelings as part of learning and growth. Here are some potential cognitive distortions and challenges based on the situation you described: * Overgeneralization: "I always let my team down and cause them stress." Challenge: Recognize that this specific incident does not define a permanent pattern. Consider times when you have been responsible and met deadlines on time, which discredits the overgeneralization. * Labeling: "I'm a terrible team member for letting this happen." Challenge: Avoid defining yourself based on a single event. You are a complex individual with many qualities, and one mistake does not negate your overall value as a team member. * Magnification (catastrophizing): "This one mistake ruins everything and makes me a failure." Challenge: Put the situation into perspective. While it was an important deadline, it does not negate all your other contributions and successes. Consider how much this specific incident will matter in the long run. * Should statements: "I should have managed my time better" or "I shouldn't have let this happen." Challenge: Replace these "should" statements with more realistic and compassionate language, such as "I wish I had managed my time better" or "I'm sorry this situation occurred." Recognize that everyone makes mistakes and that being hard on yourself is not productive. Remember, everyone faces challenges and makes mistakes from time to time. The most important thing is to learn from this experience, take responsibility for your actions, and find ways to prevent similar situations in the future. Be kind to yourself and focus on moving forward productively. ## Technical Specifications ### Model Architecture and Objective The model utilizes the LLaMA-3 architecture with modifications to specifically suit CBT cognitive distortions analysis CBTLlama: Fine Tuning Large Language Models For Identifying Thought Distortions David Schiff davidschiff100@URL
[ "# Model Card for CBTLlama: Fine Tuning LLaMA for CBT Thought Distortions", "## Model Details", "### Model Description\n\nDeveloped by David Schiff, this Hugging Face transformers model, dubbed CBTLlama, is fine-tuned on the LLaMA-3 8B architecture. \nIt is specifically tailored to enhance Cognitive Behavioral Therapy (CBT) by detecting thought distortions and raising possible challenges for them.\nThe model is trained on synthetic data generated by claude that includes a variety of different demographic and emotional states to produce CBT scenarios,\naiming to make CBT more accessible and effective.\n\nThis model is not inteded to use without any professional assistance!", "## Disclaimer", "### Limitation of Liability\n\nThe developer of CBTLlama (\"the model\") provides this model on an \"AS IS\" basis and makes no warranties regarding its performance, accuracy, reliability, or suitability for any particular task or to achieve any specific results. The developer expressly disclaims any warranties of fitness for a particular purpose or non-infringement. In no event shall the developer be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this model, even if advised of the possibility of such damage.\n\nThis model is not intended to be a substitute for professional advice, diagnosis, or treatment. Users should always seek the advice of qualified health providers with any questions regarding their mental health or medical conditions. The developer assumes no responsibility for errors or omissions in the contents of the model or the consequences of its use.\n\n\n- Developed by: David Schiff\n- Model type: Fine-tuned LLaMA-3 8B\n- Language(s) (NLP): English\n- License: MIT\n- Finetuned from model: LLaMA-3 8B", "### Model Sources\n\n- Repository: (URL to GitHub or similar)\n- Paper [optional]: (Link to any published research or documentation)\n- Demo [optional]: (Link to a model demonstration or interactive API)", "## Uses", "### Direct Use\n\nCBTLlama is intended to be used directly by mental health practitioners to train their patients in identifying cognitive distortions\nand challenging them.", "### Downstream Use\n\nWhile primarily designed for CBT, this model could be extended to other forms of therapy that require scenario generation or tailored mental health interventions.", "### Out-of-Scope Use\n\nThis model is not intended to replace therapists or make clinical decisions. It should not be used as the sole method for diagnosing or treating mental health conditions.", "## Bias, Risks, and Limitations\n\nThe model might exhibit biases based on the demographic data it was trained on. 
Users should critically assess the scenarios it generates, \nespecially when using the model with diverse populations.", "### Recommendations\n\nIt is recommended that all outputs be reviewed by qualified professionals to ensure they are appropriate and sensitive to individual circumstances.", "## How to Get Started with the Model\n\nTo start using CBTLlama, you can access the model via the Hugging Face API or download it directly from the repository.", "## Training Details", "### Training Data\n\nThe training data comprised simulated CBT scenarios generated by Claude, based on diverse demographic profiles and emotional states, ensuring broad coverage of potential therapy situations.\n\ndataset:\nURL\nURL", "### Training Procedure", "#### Training Hyperparameters\n\n- Training regime: Mixed precision training for efficiency", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Metrics", "### Results\n\nResults indicated that CBTLlama produces highly accurate detections and challenges of thought distortions.", "#### Example\nIt's completely natural and understandable to feel terrible, remorseful, and regretful in this situation. Letting your team down and causing them unnecessary stress due to procrastination is a difficult thing to cope with, and your emotions are valid. It's important to acknowledge and process these feelings as part of learning and growth.\n\nHere are some potential cognitive distortions and challenges based on the situation you described:\n\n* Overgeneralization: \"I always let my team down and cause them stress.\"\nChallenge: Recognize that this specific incident does not define a permanent pattern. Consider times when you have been responsible and met deadlines on time, which discredits the overgeneralization.\n\n* Labeling: \"I'm a terrible team member for letting this happen.\"\nChallenge: Avoid defining yourself based on a single event. You are a complex individual with many qualities, and one mistake does not negate your overall value as a team member.\n\n* Magnification (catastrophizing): \"This one mistake ruins everything and makes me a failure.\"\nChallenge: Put the situation into perspective. While it was an important deadline, it does not negate all your other contributions and successes. Consider how much this specific incident will matter in the long run.\n\n* Should statements: \"I should have managed my time better\" or \"I shouldn't have let this happen.\"\nChallenge: Replace these \"should\" statements with more realistic and compassionate language, such as \"I wish I had managed my time better\" or \"I'm sorry this situation occurred.\" Recognize that everyone makes mistakes and that being hard on yourself is not productive.\n\nRemember, everyone faces challenges and makes mistakes from time to time. The most important thing is to learn from this experience, take responsibility for your actions, and find ways to prevent similar situations in the future. Be kind to yourself and focus on moving forward productively.", "## Technical Specifications", "### Model Architecture and Objective\n\nThe model utilizes the LLaMA-3 architecture with modifications to specifically suit CBT cognitive distortions analysis\n\nCBTLlama: Fine Tuning Large Language Models\nFor Identifying Thought Distortions\nDavid Schiff\ndavidschiff100@URL" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #unsloth #trl #sft #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for CBTLlama: Fine Tuning LLaMA for CBT Thought Distortions", "## Model Details", "### Model Description\n\nDeveloped by David Schiff, this Hugging Face transformers model, dubbed CBTLlama, is fine-tuned on the LLaMA-3 8B architecture. \nIt is specifically tailored to enhance Cognitive Behavioral Therapy (CBT) by detecting thought distortions and raising possible challenges for them.\nThe model is trained on synthetic data generated by claude that includes a variety of different demographic and emotional states to produce CBT scenarios,\naiming to make CBT more accessible and effective.\n\nThis model is not inteded to use without any professional assistance!", "## Disclaimer", "### Limitation of Liability\n\nThe developer of CBTLlama (\"the model\") provides this model on an \"AS IS\" basis and makes no warranties regarding its performance, accuracy, reliability, or suitability for any particular task or to achieve any specific results. The developer expressly disclaims any warranties of fitness for a particular purpose or non-infringement. In no event shall the developer be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this model, even if advised of the possibility of such damage.\n\nThis model is not intended to be a substitute for professional advice, diagnosis, or treatment. Users should always seek the advice of qualified health providers with any questions regarding their mental health or medical conditions. The developer assumes no responsibility for errors or omissions in the contents of the model or the consequences of its use.\n\n\n- Developed by: David Schiff\n- Model type: Fine-tuned LLaMA-3 8B\n- Language(s) (NLP): English\n- License: MIT\n- Finetuned from model: LLaMA-3 8B", "### Model Sources\n\n- Repository: (URL to GitHub or similar)\n- Paper [optional]: (Link to any published research or documentation)\n- Demo [optional]: (Link to a model demonstration or interactive API)", "## Uses", "### Direct Use\n\nCBTLlama is intended to be used directly by mental health practitioners to train their patients in identifying cognitive distortions\nand challenging them.", "### Downstream Use\n\nWhile primarily designed for CBT, this model could be extended to other forms of therapy that require scenario generation or tailored mental health interventions.", "### Out-of-Scope Use\n\nThis model is not intended to replace therapists or make clinical decisions. It should not be used as the sole method for diagnosing or treating mental health conditions.", "## Bias, Risks, and Limitations\n\nThe model might exhibit biases based on the demographic data it was trained on. 
Users should critically assess the scenarios it generates, \nespecially when using the model with diverse populations.", "### Recommendations\n\nIt is recommended that all outputs be reviewed by qualified professionals to ensure they are appropriate and sensitive to individual circumstances.", "## How to Get Started with the Model\n\nTo start using CBTLlama, you can access the model via the Hugging Face API or download it directly from the repository.", "## Training Details", "### Training Data\n\nThe training data comprised simulated CBT scenarios generated by Claude, based on diverse demographic profiles and emotional states, ensuring broad coverage of potential therapy situations.\n\ndataset:\nURL\nURL", "### Training Procedure", "#### Training Hyperparameters\n\n- Training regime: Mixed precision training for efficiency", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Metrics", "### Results\n\nResults indicated that CBTLlama produces highly accurate detections and challenges of thought distortions.", "#### Example\nIt's completely natural and understandable to feel terrible, remorseful, and regretful in this situation. Letting your team down and causing them unnecessary stress due to procrastination is a difficult thing to cope with, and your emotions are valid. It's important to acknowledge and process these feelings as part of learning and growth.\n\nHere are some potential cognitive distortions and challenges based on the situation you described:\n\n* Overgeneralization: \"I always let my team down and cause them stress.\"\nChallenge: Recognize that this specific incident does not define a permanent pattern. Consider times when you have been responsible and met deadlines on time, which discredits the overgeneralization.\n\n* Labeling: \"I'm a terrible team member for letting this happen.\"\nChallenge: Avoid defining yourself based on a single event. You are a complex individual with many qualities, and one mistake does not negate your overall value as a team member.\n\n* Magnification (catastrophizing): \"This one mistake ruins everything and makes me a failure.\"\nChallenge: Put the situation into perspective. While it was an important deadline, it does not negate all your other contributions and successes. Consider how much this specific incident will matter in the long run.\n\n* Should statements: \"I should have managed my time better\" or \"I shouldn't have let this happen.\"\nChallenge: Replace these \"should\" statements with more realistic and compassionate language, such as \"I wish I had managed my time better\" or \"I'm sorry this situation occurred.\" Recognize that everyone makes mistakes and that being hard on yourself is not productive.\n\nRemember, everyone faces challenges and makes mistakes from time to time. The most important thing is to learn from this experience, take responsibility for your actions, and find ways to prevent similar situations in the future. Be kind to yourself and focus on moving forward productively.", "## Technical Specifications", "### Model Architecture and Objective\n\nThe model utilizes the LLaMA-3 architecture with modifications to specifically suit CBT cognitive distortions analysis\n\nCBTLlama: Fine Tuning Large Language Models\nFor Identifying Thought Distortions\nDavid Schiff\ndavidschiff100@URL" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lxsure/Sniper_35
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:16:47+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# ResplendentAI/SOVL_Llama3_8B AWQ - Model creator: [ResplendentAI](https://huggingface.co/ResplendentAI) - Original model: [SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/N_1D87adbMuMlSIQ5rI3_.png) ## Model Summary I'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL. What I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless. ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/SOVL_Llama3_8B-AWQ" system_message = "You are SOVL_Llama3_8B, incarnated as a powerful AI. You were created by ResplendentAI." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "base_model": ["jeiku/Average_Test_v1", "ResplendentAI/RP_Format_QuoteAsterisk_Llama3"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"}
solidrust/SOVL_Llama3_8B-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "conversational", "en", "base_model:jeiku/Average_Test_v1", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2024-04-25T16:17:29+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #en #base_model-jeiku/Average_Test_v1 #license-apache-2.0 #text-generation-inference #region-us
# ResplendentAI/SOVL_Llama3_8B AWQ - Model creator: ResplendentAI - Original model: SOVL_Llama3_8B !image/png ## Model Summary I'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL. What I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless. ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code
[ "# ResplendentAI/SOVL_Llama3_8B AWQ\n\n- Model creator: ResplendentAI\n- Original model: SOVL_Llama3_8B\n\n!image/png", "## Model Summary\n\nI'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL.\n\nWhat I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless.", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #4-bit #AWQ #autotrain_compatible #endpoints_compatible #conversational #en #base_model-jeiku/Average_Test_v1 #license-apache-2.0 #text-generation-inference #region-us \n", "# ResplendentAI/SOVL_Llama3_8B AWQ\n\n- Model creator: ResplendentAI\n- Original model: SOVL_Llama3_8B\n\n!image/png", "## Model Summary\n\nI'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL.\n\nWhat I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless.", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-gemma-sft-20p-2048 This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 1.2425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9395 | 1.0 | 675 | 1.2425 | ### Framework versions - PEFT 0.7.1 - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "gemma", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "google/gemma-7b", "model-index": [{"name": "zephyr-7b-gemma-sft-20p-2048", "results": []}]}
Jackie999/zephyr-7b-gemma-sft-20p-2048
null
[ "peft", "tensorboard", "safetensors", "gemma", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:google/gemma-7b", "license:gemma", "region:us" ]
null
2024-04-25T16:18:53+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #gemma #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-google/gemma-7b #license-gemma #region-us
zephyr-7b-gemma-sft-20p-2048 ============================ This model is a fine-tuned version of google/gemma-7b on the HuggingFaceH4/ultrachat\_200k dataset. It achieves the following results on the evaluation set: * Loss: 1.2425 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.39.0.dev0 * Pytorch 2.1.2 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #gemma #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-google/gemma-7b #license-gemma #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Sayan01/CKA-T5-CoT-l1-T1
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:19:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/shauray/Llama3-Inst-8B-DPO-Ultrafeedback <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF/resolve/main/Llama3-Inst-8B-DPO-Ultrafeedback.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
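For readers who want a concrete starting point, the sketch below loads one of the quant files from the table above with llama-cpp-python. It is not part of the original card: the chosen quant file, context size, and generation settings are illustrative assumptions.

```python
# Illustrative sketch (not from the card): running the Q4_K_M quant from the
# table above with llama-cpp-python. Adjust the path and settings to your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama3-Inst-8B-DPO-Ultrafeedback.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,       # context window; assumption, tune to available memory
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

output = llm("Briefly explain what a GGUF quantization is.", max_tokens=64)
print(output["choices"][0]["text"])
```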
{"language": ["en"], "library_name": "transformers", "base_model": "shauray/Llama3-Inst-8B-DPO-Ultrafeedback", "quantized_by": "mradermacher"}
mradermacher/Llama3-Inst-8B-DPO-Ultrafeedback-GGUF
null
[ "transformers", "gguf", "en", "base_model:shauray/Llama3-Inst-8B-DPO-Ultrafeedback", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:19:39+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-shauray/Llama3-Inst-8B-DPO-Ultrafeedback #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-shauray/Llama3-Inst-8B-DPO-Ultrafeedback #endpoints_compatible #region-us \n" ]
text-generation
transformers
# [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3) ## Description [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3). ## Prompt Template This model uses `ChatML` prompt template: ``` <|im_start|>system {System} <|im_end|> <|im_start|>user {User} <|im_end|> <|im_start|>assistant {Assistant} ```` ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
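As a small illustration of the ChatML template above (not taken from the original card), the following sketch builds a single-turn prompt string in that format before handing it to whichever GGUF runtime you use; the system and user messages are placeholders.

```python
# Hypothetical helper: formats a single-turn ChatML prompt matching the template above.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    system="You are a helpful assistant.",
    user="Summarise the difference between Q4_K_M and Q8_0 quants.",
)
print(prompt)
```

Some of the llama.cpp-based clients listed above accept a raw prompt string like this, while others apply the chat template for you, so check the client's documentation first.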
{"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "text-generation"], "model_name": "Llama-3-8B-Instruct-DPO-v0.3-GGUF", "base_model": "MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "inference": false, "model_creator": "MaziyarPanahi", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF
null
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "text-generation-inference", "region:us" ]
null
2024-04-25T16:20:10+00:00
[]
[]
TAGS #transformers #gguf #mistral #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #text-generation #llama #llama-3 #base_model-MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 #text-generation-inference #region-us
# MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF - Model creator: MaziyarPanahi - Original model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 ## Description MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF contains GGUF format model files for MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3. ## Prompt Template This model uses 'ChatML' prompt template: ' ### About GGUF GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL. Here is an incomplete list of clients and libraries that are known to support GGUF: * URL. The source project for GGUF. Offers a CLI and a server option. * llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection. * URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use. * ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible.
[ "# MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF\n- Model creator: MaziyarPanahi\n- Original model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "## Description\nMaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF contains GGUF format model files for MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3.", "## Prompt Template\n\nThis model uses 'ChatML' prompt template:\n\n'", "### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.", "## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible." ]
[ "TAGS\n#transformers #gguf #mistral #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #text-generation #llama #llama-3 #base_model-MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 #text-generation-inference #region-us \n", "# MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF\n- Model creator: MaziyarPanahi\n- Original model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3", "## Description\nMaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-GGUF contains GGUF format model files for MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3.", "## Prompt Template\n\nThis model uses 'ChatML' prompt template:\n\n'", "### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.", "## Special thanks\n\n Special thanks to Georgi Gerganov and the whole team working on URL for making all of this possible." ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/shauray/Mistral-DPO-Uncensored <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-DPO-Uncensored-GGUF/resolve/main/Mistral-DPO-Uncensored.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "base_model": "shauray/Mistral-DPO-Uncensored", "quantized_by": "mradermacher"}
mradermacher/Mistral-DPO-Uncensored-GGUF
null
[ "transformers", "gguf", "en", "base_model:shauray/Mistral-DPO-Uncensored", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:21:50+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-shauray/Mistral-DPO-Uncensored #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-shauray/Mistral-DPO-Uncensored #endpoints_compatible #region-us \n" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
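Since the usage section above is still a TODO, here is a hedged sketch of how such a checkpoint is typically loaded and evaluated with huggingface_sb3 and Stable-Baselines3. The archive filename is an assumption, not something stated in the card.

```python
# Hedged sketch: load this PPO checkpoint from the Hub and evaluate it.
# The filename below is assumed; check the repo's file listing.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="FAYSSAL12/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed archive name
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```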
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "252.74 +/- 28.28", "name": "mean_reward", "verified": false}]}]}]}
FAYSSAL12/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-25T16:22:05+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chain-texts-0.1-dolphin-mixtral-8x7b This model is a fine-tuned version of [cognitivecomputations/dolphin-2.2.1-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.2.1-mistral-7b) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.6571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8452 | 0.1887 | 20 | 1.8520 | | 1.6519 | 0.3774 | 40 | 1.7660 | | 1.6726 | 0.5660 | 60 | 1.7475 | | 1.6545 | 0.7547 | 80 | 1.7325 | | 1.7688 | 0.9434 | 100 | 1.7146 | | 1.7037 | 1.1321 | 120 | 1.7112 | | 1.5269 | 1.3208 | 140 | 1.6965 | | 1.4638 | 1.5094 | 160 | 1.6875 | | 1.647 | 1.6981 | 180 | 1.6847 | | 1.5333 | 1.8868 | 200 | 1.6772 | | 1.5194 | 2.0755 | 220 | 1.6854 | | 1.5149 | 2.2642 | 240 | 1.6847 | | 1.3981 | 2.4528 | 260 | 1.6653 | | 1.4842 | 2.6415 | 280 | 1.6612 | | 1.4262 | 2.8302 | 300 | 1.6571 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
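The card documents training only; as a hedged illustration of how the resulting adapter could be used for inference, the sketch below loads it with PEFT on top of the base model named above. The generation settings are arbitrary and the snippet is not part of the generated card.

```python
# Illustrative inference sketch (not from the card): load the LoRA adapter with PEFT.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "WHATEVER420/chain-texts-0.1-dolphin-mixtral-8x7b"
base_id = "cognitivecomputations/dolphin-2.2.1-mistral-7b"

model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Write a short chain of text messages:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```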
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "cognitivecomputations/dolphin-2.2.1-mistral-7b", "model-index": [{"name": "chain-texts-0.1-dolphin-mixtral-8x7b", "results": []}]}
WHATEVER420/chain-texts-0.1-dolphin-mixtral-8x7b
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:cognitivecomputations/dolphin-2.2.1-mistral-7b", "license:apache-2.0", "region:us" ]
null
2024-04-25T16:22:06+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-cognitivecomputations/dolphin-2.2.1-mistral-7b #license-apache-2.0 #region-us
chain-texts-0.1-dolphin-mixtral-8x7b ==================================== This model is a fine-tuned version of cognitivecomputations/dolphin-2.2.1-mistral-7b on the generator dataset. It achieves the following results on the evaluation set: * Loss: 1.6571 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 3 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant * lr\_scheduler\_warmup\_steps: 0.03 * num\_epochs: 3 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.3.0+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-cognitivecomputations/dolphin-2.2.1-mistral-7b #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ynir/gemma-Code-Instruct-Finetune-idk_v1
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:23:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_ablation_4iters_bs256_nodpo_useresponse_iter_4 This model is a fine-tuned version of [ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3](https://huggingface.co/ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
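One detail worth spelling out (my note, not the card's): the reported total_train_batch_size of 256 is simply the per-device batch size multiplied by the number of devices and the gradient-accumulation steps.

```python
# Sanity check of the effective batch size from the hyperparameters above.
per_device_train_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 4

total_train_batch_size = per_device_train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)  # 256
```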
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3", "model-index": [{"name": "0.001_ablation_4iters_bs256_nodpo_useresponse_iter_4", "results": []}]}
ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:28:33+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_ablation_4iters_bs256_nodpo_useresponse_iter_4 This model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.001_ablation_4iters_bs256_nodpo_useresponse_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_ablation_4iters_bs256_nodpo_useresponse_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.001_ablation_4iters_bs256_nodpo_useresponse_iter_3 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Kiran2004/Electra_QCA_Custom This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0010 - Validation Loss: 0.0001 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.8202 | 0.0015 | 0 | | 0.0166 | 0.0002 | 1 | | 0.0027 | 0.0001 | 2 | | 0.0010 | 0.0001 | 3 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
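As a hedged usage sketch (not included in the auto-generated card), the fine-tuned checkpoint can be queried through the transformers question-answering pipeline; the question and context below are made up for illustration.

```python
# Illustrative only: extractive QA with this checkpoint via the pipeline API.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Kiran2004/Electra_QCA_Custom",
    framework="tf",  # the repo ships TensorFlow weights
)

result = qa(
    question="Which base model was fine-tuned?",
    context="This checkpoint is a fine-tuned version of deepset/electra-base-squad2 "
            "trained for four epochs with the Adam optimizer.",
)
print(result["answer"], round(result["score"], 3))
```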
{"license": "cc-by-4.0", "tags": ["generated_from_keras_callback"], "base_model": "deepset/electra-base-squad2", "model-index": [{"name": "Kiran2004/Electra_QCA_Custom", "results": []}]}
Kiran2004/Electra_QCA_Custom
null
[ "transformers", "tf", "electra", "question-answering", "generated_from_keras_callback", "base_model:deepset/electra-base-squad2", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:30:30+00:00
[]
[]
TAGS #transformers #tf #electra #question-answering #generated_from_keras_callback #base_model-deepset/electra-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us
Kiran2004/Electra\_QCA\_Custom ============================== This model is a fine-tuned version of deepset/electra-base-squad2 on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0010 * Validation Loss: 0.0001 * Epoch: 3 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 1e-05, 'decay\_steps': 100, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.40.0 * TensorFlow 2.15.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 1e-05, 'decay\\_steps': 100, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #electra #question-answering #generated_from_keras_callback #base_model-deepset/electra-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 1e-05, 'decay\\_steps': 100, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # saiga_double_lora200 This model is a fine-tuned version of [TheBloke/Llama-2-7B-fp16](https://huggingface.co/TheBloke/Llama-2-7B-fp16) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 10 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5965 | 2.86 | 20 | 1.6827 | | 1.4291 | 5.71 | 40 | 1.7434 | | 1.2662 | 8.57 | 60 | 1.8297 | | 1.1034 | 11.43 | 80 | 1.9381 | | 0.9573 | 14.29 | 100 | 2.0414 | | 0.8209 | 17.14 | 120 | 2.1761 | | 0.71 | 20.0 | 140 | 2.2933 | | 0.6181 | 22.86 | 160 | 2.3642 | | 0.572 | 25.71 | 180 | 2.4225 | | 0.544 | 28.57 | 200 | 2.4338 | ### Framework versions - PEFT 0.10.0 - Transformers 4.36.2 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Llama-2-7B-fp16", "model-index": [{"name": "saiga_double_lora200", "results": []}]}
marcus2000/saiga_double_lora200
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:TheBloke/Llama-2-7B-fp16", "region:us" ]
null
2024-04-25T16:34:56+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-TheBloke/Llama-2-7B-fp16 #region-us
saiga\_double\_lora200 ====================== This model is a fine-tuned version of TheBloke/Llama-2-7B-fp16 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.4338 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 2 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 10 * total\_train\_batch\_size: 20 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 200 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.36.2 * Pytorch 2.2.2+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 10\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 200", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.36.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Llama-2-7B-fp16 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 10\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 200", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.36.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-gpo-gen-i1 This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-update3-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-update3-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.0550 - Rewards/chosen: -0.0251 - Rewards/rejected: -0.0231 - Rewards/accuracies: 0.3875 - Rewards/margins: -0.0020 - Logps/rejected: -278.0226 - Logps/chosen: -291.8019 - Logits/rejected: -1.7909 - Logits/chosen: -1.9487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.3754 | 0.08 | 100 | 0.0537 | 0.0 | 0.0 | 0.0 | 0.0 | -254.9398 | -266.6976 | -1.8067 | -1.9618 | | 0.3556 | 0.16 | 200 | 0.0537 | 0.0 | 0.0 | 0.0 | 0.0 | -254.9398 | -266.6976 | -1.8067 | -1.9618 | | 0.3556 | 0.24 | 300 | 0.0537 | 0.0 | 0.0 | 0.0 | 0.0 | -254.9398 | -266.6976 | -1.8067 | -1.9618 | | 0.3606 | 0.32 | 400 | 0.0537 | 0.0 | 0.0 | 0.0 | 0.0 | -254.9398 | -266.6976 | -1.8067 | -1.9618 | | 0.3606 | 0.4 | 500 | 0.0586 | -0.0343 | -0.0263 | 0.3125 | -0.0079 | -281.2869 | -300.9843 | -1.7627 | -1.9202 | | 0.3408 | 0.48 | 600 | 0.0587 | -0.0387 | -0.0304 | 0.3120 | -0.0083 | -285.3777 | -305.4413 | -1.7361 | -1.8917 | | 0.3359 | 0.56 | 700 | 0.0587 | -0.0387 | -0.0304 | 0.3095 | -0.0083 | -285.3294 | -305.3720 | -1.7363 | -1.8920 | | 0.3507 | 0.64 | 800 | 0.0569 | -0.0251 | -0.0199 | 0.3215 | -0.0052 | -274.8357 | -291.8357 | -1.8172 | -1.9784 | | 0.3926 | 0.72 | 900 | 0.0550 | -0.0245 | -0.0224 | 0.3840 | -0.0021 | -277.3842 | -291.2067 | -1.7982 | -1.9565 | | 0.3655 | 0.8 | 1000 | 0.0549 | -0.0254 | -0.0235 | 0.3860 | -0.0019 | -278.4594 | -292.0937 | -1.7905 | -1.9482 | | 0.3682 | 0.88 | 1100 | 0.0549 | -0.0253 | -0.0234 | 0.3850 | -0.0020 | -278.3317 | -292.0442 | -1.7919 | -1.9497 | | 0.3531 | 0.96 | 1200 | 0.0550 | -0.0251 | -0.0231 | 0.3910 | -0.0020 | -278.0787 | -291.8378 | -1.7915 | -1.9493 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "zephyr-7b-gpo-gen-i1", "results": []}]}
lole25/zephyr-7b-gpo-gen-i1
null
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-25T16:35:22+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
zephyr-7b-gpo-gen-i1 ==================== This model is a fine-tuned version of DUAL-GPO/zephyr-7b-gpo-update3-i0 on the HuggingFaceH4/ultrafeedback\_binarized dataset. It achieves the following results on the evaluation set: * Loss: 0.0550 * Rewards/chosen: -0.0251 * Rewards/rejected: -0.0231 * Rewards/accuracies: 0.3875 * Rewards/margins: -0.0020 * Logps/rejected: -278.0226 * Logps/chosen: -291.8019 * Logits/rejected: -1.7909 * Logits/chosen: -1.9487 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-06 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * distributed\_type: multi-GPU * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 4 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
text-generation
transformers
## 4-bit GEMM AWQ Quantizations of maverick-llama3-8B Using <a href="https://github.com/casper-hansen/AutoAWQ/">AutoAWQ</a> release <a href="https://github.com/casper-hansen/AutoAWQ/releases/tag/v0.2.4">v0.2.4</a> for quantization. Original model: https://huggingface.co/feeltheAGI/maverick-llama3-8B/ ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## AWQ Parameters - q_group_size: 128 - w_bit: 4 - zero_point: True - version: GEMM ## How to run From the AutoAWQ repo [here](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py) First install autoawq pypi package: ``` pip install autoawq ``` Then run the following: ``` from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer quant_path = "models/maverick-llama3-8B-AWQ" # Load model model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" chat = [ {"role": "system", "content": "You are a concise assistant that helps answer questions."}, {"role": "user", "content": prompt}, ] # <|eot_id|> used for llama 3 models terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] tokens = tokenizer.apply_chat_template( chat, return_tensors="pt" ).cuda() # Generate output generation_output = model.generate( tokens, streamer=streamer, max_new_tokens=64, eos_token_id=terminators ) ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
{"license": "apache-2.0", "tags": ["meta-llama/Meta-Llama-3-8B"], "datasets": ["feeltheAGI/maverick-sharegpt"], "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/maverick-llama3-8B-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "meta-llama/Meta-Llama-3-8B", "conversational", "dataset:feeltheAGI/maverick-sharegpt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-25T16:36:15+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #meta-llama/Meta-Llama-3-8B #conversational #dataset-feeltheAGI/maverick-sharegpt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
## 4-bit GEMM AWQ Quantizations of maverick-llama3-8B Using <a href="URL release <a href="URL for quantization. Original model: URL ## Prompt format ## AWQ Parameters - q_group_size: 128 - w_bit: 4 - zero_point: True - version: GEMM ## How to run From the AutoAWQ repo here First install autoawq pypi package: Then run the following: Want to support my work? Visit my ko-fi page here: URL
[ "## 4-bit GEMM AWQ Quantizations of maverick-llama3-8B\n\nUsing <a href=\"URL release <a href=\"URL for quantization.\n\nOriginal model: URL", "## Prompt format", "## AWQ Parameters\n\n - q_group_size: 128\n - w_bit: 4\n - zero_point: True\n - version: GEMM", "## How to run\n\nFrom the AutoAWQ repo here\n\nFirst install autoawq pypi package:\n\n\n\nThen run the following:\n\n\n\nWant to support my work? Visit my ko-fi page here: URL" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #meta-llama/Meta-Llama-3-8B #conversational #dataset-feeltheAGI/maverick-sharegpt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "## 4-bit GEMM AWQ Quantizations of maverick-llama3-8B\n\nUsing <a href=\"URL release <a href=\"URL for quantization.\n\nOriginal model: URL", "## Prompt format", "## AWQ Parameters\n\n - q_group_size: 128\n - w_bit: 4\n - zero_point: True\n - version: GEMM", "## How to run\n\nFrom the AutoAWQ repo here\n\nFirst install autoawq pypi package:\n\n\n\nThen run the following:\n\n\n\nWant to support my work? Visit my ko-fi page here: URL" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_RMSProp_1-e5_20Epoch_swin-base-window7-224-in22k_fold3 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.3110 - Accuracy: 0.6651 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0919 | 1.0 | 923 | 1.1489 | 0.6081 | | 0.9736 | 2.0 | 1846 | 1.0024 | 0.6559 | | 0.975 | 3.0 | 2769 | 0.9707 | 0.6684 | | 0.7334 | 4.0 | 3692 | 0.9560 | 0.6781 | | 0.485 | 5.0 | 4615 | 1.0348 | 0.6662 | | 0.316 | 6.0 | 5538 | 1.0693 | 0.6719 | | 0.3512 | 7.0 | 6461 | 1.1640 | 0.6659 | | 0.2952 | 8.0 | 7384 | 1.3227 | 0.6546 | | 0.2423 | 9.0 | 8307 | 1.3837 | 0.6643 | | 0.1889 | 10.0 | 9230 | 1.4884 | 0.6619 | | 0.1312 | 11.0 | 10153 | 1.6309 | 0.6632 | | 0.102 | 12.0 | 11076 | 1.7865 | 0.6622 | | 0.0735 | 13.0 | 11999 | 1.8485 | 0.6586 | | 0.1241 | 14.0 | 12922 | 2.0117 | 0.6600 | | 0.1724 | 15.0 | 13845 | 2.0571 | 0.6627 | | 0.0383 | 16.0 | 14768 | 2.1327 | 0.6603 | | 0.0264 | 17.0 | 15691 | 2.2234 | 0.6641 | | 0.0681 | 18.0 | 16614 | 2.2755 | 0.6681 | | 0.0199 | 19.0 | 17537 | 2.3061 | 0.6643 | | 0.1052 | 20.0 | 18460 | 2.3110 | 0.6651 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
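For readers who want to reproduce the recipe, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as sketched below. The output directory is a placeholder, and dataset loading, image preprocessing and the metric function are deliberately omitted; only the values reported in this card are filled in.

```python
# Rough sketch of the listed hyperparameters expressed as TrainingArguments.
# Dataset objects and compute_metrics are intentionally left out.
from transformers import AutoModelForImageClassification, TrainingArguments

model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-base-patch4-window7-224-in22k"
)

training_args = TrainingArguments(
    output_dir="boya1-swin-fold3",      # placeholder name
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)

# Trainer(model=model, args=training_args, train_dataset=..., eval_dataset=...)
# would then run the fine-tuning loop summarised in the results table above.
```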
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-base-patch4-window7-224-in22k", "model-index": [{"name": "Boya1_RMSProp_1-e5_20Epoch_swin-base-window7-224-in22k_fold3", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6651338923451448, "name": "Accuracy"}]}]}]}
onizukal/Boya1_RMSProp_1-e5_20Epoch_swin-base-window7-224-in22k_fold3
null
[ "transformers", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-base-patch4-window7-224-in22k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:39:48+00:00
[]
[]
TAGS #transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-base-patch4-window7-224-in22k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Boya1\_RMSProp\_1-e5\_20Epoch\_swin-base-window7-224-in22k\_fold3 ================================================================= This model is a fine-tuned version of microsoft/swin-base-patch4-window7-224-in22k on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 2.3110 * Accuracy: 0.6651 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.35.0 * Pytorch 2.1.0 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-base-patch4-window7-224-in22k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
terry69/llama2-poison-10p-full
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:41:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-130_PasswordMatch_n-its-10-seed-0 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
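The card gives no inference example; below is a minimal, assumed-generic way to query the classifier with the `pipeline` API. The sample input is purely illustrative, since the expected input format for the PasswordMatch task is not documented here.

```python
# Sketch only: load the fine-tuned classifier and score one example.
# The input string is a made-up illustration, not the task's real format.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-1b_mz-130_PasswordMatch_n-its-10-seed-0",
)
print(classifier("System password: hunter2. User guess: hunter2."))
```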
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-130_PasswordMatch_n-its-10-seed-0", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-130_PasswordMatch_n-its-10-seed-0
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:42:28+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-1b_mz-130_PasswordMatch_n-its-10-seed-0 This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-1b_mz-130_PasswordMatch_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-1b_mz-130_PasswordMatch_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-dmae-va-U5-100bc This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5017 - Accuracy: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.9 | 7 | 0.5017 | 0.8667 | | 0.3168 | 1.94 | 15 | 0.5970 | 0.8 | | 0.2613 | 2.97 | 23 | 0.5442 | 0.8167 | | 0.222 | 4.0 | 31 | 0.7156 | 0.7667 | | 0.222 | 4.9 | 38 | 0.5175 | 0.85 | | 0.1783 | 5.94 | 46 | 0.6035 | 0.8167 | | 0.168 | 6.97 | 54 | 0.5045 | 0.85 | | 0.1456 | 8.0 | 62 | 0.4923 | 0.85 | | 0.1456 | 8.9 | 69 | 0.5346 | 0.85 | | 0.1236 | 9.03 | 70 | 0.5346 | 0.85 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
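As with the other auto-generated cards in this set, no usage snippet is provided. A minimal inference sketch with the image-classification pipeline could look like the following; the image path is a placeholder rather than a file shipped with the model.

```python
# Sketch only: classify a single image with the fine-tuned ViT checkpoint.
# "example.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-dmae-va-U5-100bc",
)
print(classifier("example.jpg"))
```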
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "vit-base-patch16-224-dmae-va-U5-100bc", "results": []}]}
Augusto777/vit-base-patch16-224-dmae-va-U5-100bc
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:42:40+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
vit-base-patch16-224-dmae-va-U5-100bc ===================================== This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.5017 * Accuracy: 0.8667 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.05 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu118 * Datasets 2.16.1 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
translation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sorsolingo-mt-tl-bsl-test This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-bcl](https://huggingface.co/Helsinki-NLP/opus-mt-en-bcl) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.3029 - Bleu: 9.6771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 22 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
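A hedged usage sketch: the checkpoint is a Marian fine-tune, so the standard translation pipeline should apply. Note that the card does not state the language direction, so both the assumed direction and the sample sentence below are illustrative only.

```python
# Sketch only: run the fine-tuned Marian model through the translation pipeline.
# The translation direction and the sample sentence are assumptions; the card
# does not document the source/target languages explicitly.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="dapooni/sorsolingo-mt-tl-bsl-test",
)
print(translator("Magandang umaga po sa inyong lahat."))
```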
{"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "metrics": ["bleu"], "base_model": "Helsinki-NLP/opus-mt-en-bcl", "model-index": [{"name": "sorsolingo-mt-tl-bsl-test", "results": []}]}
dapooni/sorsolingo-mt-tl-bsl-test
null
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-bcl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:43:32+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #marian #text2text-generation #translation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-en-bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# sorsolingo-mt-tl-bsl-test This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-bcl on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.3029 - Bleu: 9.6771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 22 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# sorsolingo-mt-tl-bsl-test\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-bcl on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 4.3029\n- Bleu: 9.6771", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 22\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #marian #text2text-generation #translation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-en-bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# sorsolingo-mt-tl-bsl-test\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-bcl on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 4.3029\n- Bleu: 9.6771", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 22\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-1.3B-many-to-many-pronouncorrection-charaug This model is a fine-tuned version of [jq/nllb-1.3B-many-to-many-step-2k](https://huggingface.co/jq/nllb-1.3B-many-to-many-step-2k) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.2075 - Bleu Ach Eng: 28.371 - Bleu Lgg Eng: 30.45 - Bleu Lug Eng: 41.978 - Bleu Nyn Eng: 32.296 - Bleu Teo Eng: 30.422 - Bleu Eng Ach: 20.972 - Bleu Eng Lgg: 22.362 - Bleu Eng Lug: 30.359 - Bleu Eng Nyn: 15.305 - Bleu Eng Teo: 21.391 - Bleu Mean: 27.391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 25 - eval_batch_size: 25 - seed: 42 - gradient_accumulation_steps: 120 - total_train_batch_size: 3000 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu Ach Eng | Bleu Lgg Eng | Bleu Lug Eng | Bleu Nyn Eng | Bleu Teo Eng | Bleu Eng Ach | Bleu Eng Lgg | Bleu Eng Lug | Bleu Eng Nyn | Bleu Eng Teo | Bleu Mean | |:-------------:|:------:|:----:|:---------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:---------:| | No log | 0.0667 | 100 | 1.1541 | 29.033 | 31.47 | 41.596 | 34.169 | 32.442 | 19.677 | 19.657 | 27.889 | 14.554 | 19.143 | 26.963 | | No log | 1.0301 | 200 | 1.1570 | 27.473 | 31.853 | 41.934 | 32.575 | 31.606 | 20.25 | 20.634 | 28.592 | 13.672 | 19.997 | 26.859 | | No log | 1.0968 | 300 | 1.1288 | 29.086 | 33.257 | 43.387 | 33.678 | 33.579 | 20.377 | 20.91 | 28.906 | 14.992 | 21.013 | 27.919 | | No log | 2.0603 | 400 | 1.1620 | 28.122 | 31.46 | 42.491 | 33.304 | 32.331 | 20.282 | 21.604 | 29.577 | 14.961 | 20.94 | 27.507 | | 0.7273 | 3.0237 | 500 | 1.1661 | 28.311 | 32.122 | 42.825 | 32.333 | 32.415 | 19.799 | 22.287 | 29.558 | 15.708 | 21.948 | 27.731 | | 0.7273 | 3.0904 | 600 | 1.1652 | 28.593 | 30.62 | 41.964 | 33.383 | 32.08 | 21.142 | 21.8 | 30.215 | 14.717 | 21.744 | 27.626 | | 0.7273 | 4.0538 | 700 | 1.2075 | 28.371 | 30.45 | 41.978 | 32.296 | 30.422 | 20.972 | 22.362 | 30.359 | 15.305 | 21.391 | 27.391 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0 - Datasets 2.19.0 - Tokenizers 0.19.1
{"tags": ["generated_from_trainer"], "datasets": ["generator"], "base_model": "jq/nllb-1.3B-many-to-many-step-2k", "model-index": [{"name": "nllb-1.3B-many-to-many-pronouncorrection-charaug", "results": []}]}
jq/nllb-1.3B-many-to-many-pronouncorrection-charaug
null
[ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "dataset:generator", "base_model:jq/nllb-1.3B-many-to-many-step-2k", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:43:55+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #m2m_100 #text2text-generation #generated_from_trainer #dataset-generator #base_model-jq/nllb-1.3B-many-to-many-step-2k #autotrain_compatible #endpoints_compatible #region-us
nllb-1.3B-many-to-many-pronouncorrection-charaug ================================================ This model is a fine-tuned version of jq/nllb-1.3B-many-to-many-step-2k on the generator dataset. It achieves the following results on the evaluation set: * Loss: 1.2075 * Bleu Ach Eng: 28.371 * Bleu Lgg Eng: 30.45 * Bleu Lug Eng: 41.978 * Bleu Nyn Eng: 32.296 * Bleu Teo Eng: 30.422 * Bleu Eng Ach: 20.972 * Bleu Eng Lgg: 22.362 * Bleu Eng Lug: 30.359 * Bleu Eng Nyn: 15.305 * Bleu Eng Teo: 21.391 * Bleu Mean: 27.391 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 25 * eval\_batch\_size: 25 * seed: 42 * gradient\_accumulation\_steps: 120 * total\_train\_batch\_size: 3000 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 1500 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 25\n* eval\\_batch\\_size: 25\n* seed: 42\n* gradient\\_accumulation\\_steps: 120\n* total\\_train\\_batch\\_size: 3000\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1500\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #m2m_100 #text2text-generation #generated_from_trainer #dataset-generator #base_model-jq/nllb-1.3B-many-to-many-step-2k #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 25\n* eval\\_batch\\_size: 25\n* seed: 42\n* gradient\\_accumulation\\_steps: 120\n* total\\_train\\_batch\\_size: 3000\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1500\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-to-speech
voicecraft
This model has been pushed to the Hub using **voicecraft**: - Repo: https://github.com/jasonppy/VoiceCraft - Docs: [More Information Needed]
{"library_name": "voicecraft", "tags": ["text-to-speech", "pytorch_model_hub_mixin", "model_hub_mixin"], "repo_url": "https://github.com/jasonppy/VoiceCraft"}
pyp1/VoiceCraft_330M_TTSEnhanced
null
[ "voicecraft", "safetensors", "text-to-speech", "pytorch_model_hub_mixin", "model_hub_mixin", "region:us" ]
null
2024-04-25T16:46:55+00:00
[]
[]
TAGS #voicecraft #safetensors #text-to-speech #pytorch_model_hub_mixin #model_hub_mixin #region-us
This model has been pushed to the Hub using voicecraft: - Repo: URL - Docs:
[]
[ "TAGS\n#voicecraft #safetensors #text-to-speech #pytorch_model_hub_mixin #model_hub_mixin #region-us \n" ]
text-generation
transformers
# Meta-Llama-3-8B-Instruct-64k-PoSE <img src="https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/resolve/main/output.png" /> This is a custom version of the Meta Llama 3 8B instruction-tuned language model with an extended context length of up to 64,000 tokens. It was created by merging the [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model with a LoRA adapter finetuned using [PoSE](https://huggingface.co/papers/2309.10400) by [Wing Lian](https://huggingface.co/winglian) to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0. They used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens. They have further set rope_theta to 2M after continued pre-training to potentially further extend the context past 64k. This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. They trained a rank stabilized LoRA of rank 256. [WandB](https://wandb.ai/oaaic/llama-3-64k/runs/tkcyjt37) ### Model Details - **Base Model**: Meta Llama 3 8B instruction-tuned model - **Context Length**: Up to 64,000 tokens (increased from original 8,192 token limit) - **Adapter Training**: PoSE adapter finetuned on 300M tokens from the RedPajama V1 dataset with 6k-8k token sequences. - **Adapter Rank**: 256 rank stabilized LoRA adapter This extended context model allows for much longer form inputs and generation compared to the original base model. It maintains the strong instruction-following and safety capabilities of Llama 3 while greatly increasing the applicable use cases. See the Original Repo by Wing Lian for more details on the adapter training process. ### Usage This model can be used just like the base Llama 3 8B model, but with the increased context length enabling much longer prompts and outputs. See the example usage with the Transformers library: ```python import transformers import torch model_id = "Azma-AI/Meta-Llama-3-8B-Instruct-64k-PoSE" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) long_prompt = "..." # Your prompt up to 64k tokens output = pipeline(long_prompt) ``` ### Citation If you use this model, please cite the original Meta Llama 3 model card and the PoSE adapter paper: ```code @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ``` ### Acknowledgments [Wing Lian](https://huggingface.co/winglian) [MetaAI](https://huggingface.co/meta-llama)
{"language": ["en"], "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "pipeline_tag": "text-generation"}
Azma-AI/Meta-Llama-3-8B-Instruct-64k-PoSE
null
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "arxiv:2309.10400", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:47:03+00:00
[ "2309.10400" ]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #conversational #en #arxiv-2309.10400 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Meta-Llama-3-8B-Instruct-64k-PoSE <img src="URL /> This is a custom version of the Meta Llama 3 8B instruction-tuned language model with an extended context length of up to 64,000 tokens. It was created by merging the meta-llama/Meta-Llama-3-8B-Instruct model with a LoRA adapter finetuned using PoSE by Wing Lian to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0. They used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens. They have further set rope_theta to 2M after continued pre-training to potentially further extend the context past 64k. This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. They trained a rank stabilized LoRA of rank 256. WandB ### Model Details - Base Model: Meta Llama 3 8B instruction-tuned model - Context Length: Up to 64,000 tokens (increased from original 8,192 token limit) - Adapter Training: PoSE adapter finetuned on 300M tokens from the RedPajama V1 dataset with 6k-8k token sequences. - Adapter Rank: 256 rank stabilized LoRA adapter This extended context model allows for much longer form inputs and generation compared to the original base model. It maintains the strong instruction-following and safety capabilities of Llama 3 while greatly increasing the applicable use cases. See the Original Repo by Wing Lian for more details on the adapter training process. ### Usage This model can be used just like the base Llama 3 8B model, but with the increased context length enabling much longer prompts and outputs. See the example usage with the Transformers library: If you use this model, please cite the original Meta Llama 3 model card and the PoSE adapter paper: ### Acknowledgments Wing Lian MetaAI
[ "# Meta-Llama-3-8B-Instruct-64k-PoSE\n\n<img src=\"URL />\n\nThis is a custom version of the Meta Llama 3 8B instruction-tuned language model with an extended context length of up to 64,000 tokens. It was created by merging the meta-llama/Meta-Llama-3-8B-Instruct model with a LoRA adapter finetuned using PoSE by Wing Lian to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0.\nThey used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens.\n\nThey have further set rope_theta to 2M after continued pre-training to potentially further extend the context past 64k. \n\nThis was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. They trained a rank stabilized LoRA of rank 256. WandB", "### Model Details\n- Base Model: Meta Llama 3 8B instruction-tuned model\n- Context Length: Up to 64,000 tokens (increased from original 8,192 token limit)\n- Adapter Training: PoSE adapter finetuned on 300M tokens from the RedPajama V1 dataset with 6k-8k token sequences.\n- Adapter Rank: 256 rank stabilized LoRA adapter\n\nThis extended context model allows for much longer form inputs and generation compared to the original base model. It maintains the strong instruction-following and safety capabilities of Llama 3 while greatly increasing the applicable use cases.\nSee the Original Repo by Wing Lian for more details on the adapter training process.", "### Usage\nThis model can be used just like the base Llama 3 8B model, but with the increased context length enabling much longer prompts and outputs. See the example usage with the Transformers library:\n\n\n\nIf you use this model, please cite the original Meta Llama 3 model card and the PoSE adapter paper:", "### Acknowledgments\nWing Lian\nMetaAI" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #conversational #en #arxiv-2309.10400 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Meta-Llama-3-8B-Instruct-64k-PoSE\n\n<img src=\"URL />\n\nThis is a custom version of the Meta Llama 3 8B instruction-tuned language model with an extended context length of up to 64,000 tokens. It was created by merging the meta-llama/Meta-Llama-3-8B-Instruct model with a LoRA adapter finetuned using PoSE by Wing Lian to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0.\nThey used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens.\n\nThey have further set rope_theta to 2M after continued pre-training to potentially further extend the context past 64k. \n\nThis was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. They trained a rank stabilized LoRA of rank 256. WandB", "### Model Details\n- Base Model: Meta Llama 3 8B instruction-tuned model\n- Context Length: Up to 64,000 tokens (increased from original 8,192 token limit)\n- Adapter Training: PoSE adapter finetuned on 300M tokens from the RedPajama V1 dataset with 6k-8k token sequences.\n- Adapter Rank: 256 rank stabilized LoRA adapter\n\nThis extended context model allows for much longer form inputs and generation compared to the original base model. It maintains the strong instruction-following and safety capabilities of Llama 3 while greatly increasing the applicable use cases.\nSee the Original Repo by Wing Lian for more details on the adapter training process.", "### Usage\nThis model can be used just like the base Llama 3 8B model, but with the increased context length enabling much longer prompts and outputs. See the example usage with the Transformers library:\n\n\n\nIf you use this model, please cite the original Meta Llama 3 model card and the PoSE adapter paper:", "### Acknowledgments\nWing Lian\nMetaAI" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
geekdom/medmistral-7b
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T16:49:34+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
null
## Exllama v2 Quantizations of maverick-llama3-8B Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization. <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b> Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions. Original model: https://huggingface.co/feeltheAGI/maverick-llama3-8B/ ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/maverick-llama3-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/maverick-llama3-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/maverick-llama3-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/maverick-llama3-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/maverick-llama3-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/maverick-llama3-8B-exl2 maverick-llama3-8B-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/maverick-llama3-8B-exl2 --revision 6_5 --local-dir maverick-llama3-8B-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/maverick-llama3-8B-exl2 --revision 6_5 --local-dir maverick-llama3-8B-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
{"license": "apache-2.0", "tags": ["meta-llama/Meta-Llama-3-8B"], "datasets": ["feeltheAGI/maverick-sharegpt"], "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/maverick-llama3-8B-exl2
null
[ "meta-llama/Meta-Llama-3-8B", "text-generation", "dataset:feeltheAGI/maverick-sharegpt", "license:apache-2.0", "region:us" ]
null
2024-04-25T16:51:41+00:00
[]
[]
TAGS #meta-llama/Meta-Llama-3-8B #text-generation #dataset-feeltheAGI/maverick-sharegpt #license-apache-2.0 #region-us
Exllama v2 Quantizations of maverick-llama3-8B ---------------------------------------------- Using <a href="URL ExLlamaV2 v0.0.19 for quantization. **The "main" branch only contains the URL, download one of the other branches for the model (see below)** Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions. Original model: URL Prompt format ------------- Available sizes --------------- Download instructions --------------------- With git: With huggingface hub (credit to TheBloke for instructions): To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch: Linux: Windows (which apparently doesn't like \_ in folders sometimes?): Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#meta-llama/Meta-Llama-3-8B #text-generation #dataset-feeltheAGI/maverick-sharegpt #license-apache-2.0 #region-us \n" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**
  This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .

  ## Usage

  ```python
  
  model = load_from_hub(repo_id="Asubramanian19/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")

  # Don't forget to check if you need to add additional attributes (is_slippery=False etc)
  env = gym.make(model["env_id"])
  ```
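The snippet above relies on a `load_from_hub` helper defined in the Hugging Face Deep RL course notebooks rather than imported from a library, and it omits the `gym` import. Below is a minimal, self-contained sketch of how loading and rolling out the agent typically looks, assuming the pickled dict stores `env_id` and `qtable` keys (as in the course template) and the Gymnasium API; treat it as an illustration, not the card author's own code.

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Fetch the pickled agent dict from the Hub (mirrors the Deep RL course helper)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(
    repo_id="Asubramanian19/q-FrozenLake-v1-4x4-Slippery",
    filename="q-learning.pkl",
)

# Assumed keys: "env_id" (the Gym id) and "qtable" (a NumPy array of Q-values).
env = gym.make(model["env_id"])
state, _ = env.reset(seed=42)
done = False
total_reward = 0.0
while not done:
    action = int(model["qtable"][state].argmax())  # greedy action w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```

If the stored dict uses different keys or the classic `gym` package instead of Gymnasium, adjust the imports and lookups accordingly.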
{"tags": ["FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-Slippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4", "type": "FrozenLake-v1-4x4"}, "metrics": [{"type": "mean_reward", "value": "0.34 +/- 0.47", "name": "mean_reward", "verified": false}]}]}]}
Asubramanian19/q-FrozenLake-v1-4x4-Slippery
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-25T16:54:34+00:00
[]
[]
TAGS #FrozenLake-v1-4x4 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1
 This is a trained model of a Q-Learning agent playing FrozenLake-v1 .

 ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/shauray/Llama3-8B-DPO-uncensored <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8B-DPO-uncensored-GGUF/resolve/main/Llama3-8B-DPO-uncensored.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or 
if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
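Beyond the linked READMEs, one common way to run the quants listed above is llama-cpp-python; the sketch below is a hedged illustration rather than part of the original card. The filename is the Q4_K_M entry from the table, and the `n_ctx` / `n_gpu_layers` values are illustrative assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Q4_K_M is the "fast, recommended" quant from the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama3-8B-DPO-uncensored-GGUF",
    filename="Llama3-8B-DPO-uncensored.Q4_K_M.gguf",
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,        # context length; choose to fit your RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if one is available, else set 0
)

out = llm("Briefly explain what GGUF quantization trades off.", max_tokens=128)
print(out["choices"][0]["text"])
```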
{"language": ["en"], "library_name": "transformers", "base_model": "shauray/Llama3-8B-DPO-uncensored", "quantized_by": "mradermacher"}
mradermacher/Llama3-8B-DPO-uncensored-GGUF
null
[ "transformers", "gguf", "en", "base_model:shauray/Llama3-8B-DPO-uncensored", "endpoints_compatible", "region:us" ]
null
2024-04-25T16:58:11+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-shauray/Llama3-8B-DPO-uncensored #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-shauray/Llama3-8B-DPO-uncensored #endpoints_compatible #region-us \n" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "beomi/Llama-3-Open-Ko-8B"}
windopper/Arc
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/Llama-3-Open-Ko-8B", "region:us" ]
null
2024-04-25T17:02:35+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-beomi/Llama-3-Open-Ko-8B #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-beomi/Llama-3-Open-Ko-8B #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
fedora-copr/phi-2-mistral-snippets-causal
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:02:42+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gte-base-zh-finetuned-main_rev_rate This model is a fine-tuned version of [thenlper/gte-base-zh](https://huggingface.co/thenlper/gte-base-zh) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3443 - Accuracy: 0.0807 - F1: 0.0713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3784 | 1.0 | 85 | 0.3260 | 0.0998 | 0.0181 | | 0.3258 | 2.0 | 170 | 0.3268 | 0.0913 | 0.0153 | | 0.3256 | 3.0 | 255 | 0.3271 | 0.0913 | 0.0153 | | 0.3255 | 4.0 | 340 | 0.3262 | 0.1040 | 0.0305 | | 0.3255 | 5.0 | 425 | 0.3266 | 0.0998 | 0.0246 | | 0.3248 | 6.0 | 510 | 0.3257 | 0.0955 | 0.0489 | | 0.3246 | 7.0 | 595 | 0.3263 | 0.1104 | 0.0566 | | 0.3233 | 8.0 | 680 | 0.3272 | 0.1062 | 0.0593 | | 0.3206 | 9.0 | 765 | 0.3287 | 0.1146 | 0.0754 | | 0.3171 | 10.0 | 850 | 0.3324 | 0.0977 | 0.0686 | | 0.3113 | 11.0 | 935 | 0.3321 | 0.0892 | 0.0809 | | 0.3021 | 12.0 | 1020 | 0.3357 | 0.0977 | 0.0800 | | 0.2954 | 13.0 | 1105 | 0.3395 | 0.0828 | 0.0750 | | 0.2872 | 14.0 | 1190 | 0.3428 | 0.0913 | 0.0803 | | 0.2821 | 15.0 | 1275 | 0.3443 | 0.0807 | 0.0713 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
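The card above lists hyperparameters but no training code; as an illustration only, the sketch below shows how those settings would map onto `TrainingArguments` with the 🤗 `Trainer`. The tiny in-memory dataset, `num_labels=2`, and the per-epoch evaluation strategy are assumptions — the real `main_rev_rate` data and label set are not described in the card (the Adam betas/epsilon listed are the Trainer defaults).

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "thenlper/gte-base-zh"
tokenizer = AutoTokenizer.from_pretrained(base)
# The card does not state the label count; num_labels=2 is a placeholder for illustration.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Tiny dummy dataset so the sketch runs end to end; it stands in for the undisclosed review data.
raw = Dataset.from_dict({"text": ["很好", "太差", "不错", "一般"], "label": [1, 0, 1, 0]})
encoded = raw.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=32)
)

args = TrainingArguments(
    output_dir="gte-base-zh-finetuned-main_rev_rate",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # assumption: the card reports per-epoch validation metrics
    report_to="none",
)

Trainer(
    model=model,
    args=args,
    train_dataset=encoded,
    eval_dataset=encoded,
    tokenizer=tokenizer,
).train()
```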
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "thenlper/gte-base-zh", "model-index": [{"name": "gte-base-zh-finetuned-main_rev_rate", "results": []}]}
nop1006/gte-base-zh-finetuned-main_rev_rate
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:thenlper/gte-base-zh", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:02:49+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-thenlper/gte-base-zh #license-mit #autotrain_compatible #endpoints_compatible #region-us
gte-base-zh-finetuned-main\_rev\_rate ===================================== This model is a fine-tuned version of thenlper/gte-base-zh on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.3443 * Accuracy: 0.0807 * F1: 0.0713 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 15 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-thenlper/gte-base-zh #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
aakku/tuned-mamba140m
null
[ "transformers", "safetensors", "mamba", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:03:33+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mamba #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mamba #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

I fine tune: [code-millenials-1b](https://huggingface.co/budecosystem/code-millenials-1b) on the provided dataset. The model is good at coding and small enough to allow portability, but not trained on python specifically. I fine tune on python.

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** Marco Molinari
- **Language(s) (NLP):** Python
- **Finetuned from model [optional]:** code-millenials-1b

### Model Sources [optional]

## Uses
Light weight python coding

### Training Data
https://huggingface.co/datasets/ArtifactAI/arxiv_python_research_code
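The card above does not include a usage snippet. What follows is a minimal generation sketch with the standard 🤗 Transformers causal-LM API, assuming the checkpoint loads like its phi-based parent; `trust_remote_code=True` reflects the repo's `custom_code`/`phi` tags, and the prompt and generation settings are illustrative assumptions rather than the author's recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "marco-molinari/python-code-millenials-1b"  # this repo
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float32,
    trust_remote_code=True,  # the repo is tagged custom_code (phi architecture)
)

prompt = "def fibonacci(n: int) -> int:\n    \"\"\"Return the n-th Fibonacci number.\"\"\"\n"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```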
{"library_name": "transformers", "tags": []}
marco-molinari/python-code-millenials-1b
null
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T17:05:39+00:00
[]
[]
TAGS #transformers #safetensors #phi #text-generation #custom_code #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID

## Model Details

### Model Description

I fine tune: code-millenials-1b on the provided dataset. The model is good at coding and small enough to allow portability, but not trained on python specifically. I fine tune on python.

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by: Marco Molinari
- Language(s) (NLP): Python
- Finetuned from model [optional]: code-millenials-1b

### Model Sources [optional]

## Uses
Light weight python coding

### Training Data
URL
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\nI fine tune: code-millenials-1b on the provided dataset. The model is good at conding and small enough to allow portability, but not trained on python specifically. I fine tune on python.\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Marco Molinari\n- Language(s) (NLP): Python\n- Finetuned from model [optional]: code-millenials-1b", "### Model Sources [optional]", "## Uses\nLight weight python coding", "### Training Data\nURL" ]
[ "TAGS\n#transformers #safetensors #phi #text-generation #custom_code #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\nI fine tune: code-millenials-1b on the provided dataset. The model is good at conding and small enough to allow portability, but not trained on python specifically. I fine tune on python.\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Marco Molinari\n- Language(s) (NLP): Python\n- Finetuned from model [optional]: code-millenials-1b", "### Model Sources [optional]", "## Uses\nLight weight python coding", "### Training Data\nURL" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ellis-v2-emotion-leadership-multi-label This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1172 - Accuracy: 0.9690 - F1: 0.9224 - Precision: 0.9250 - Recall: 0.9198 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.125 | 1.0 | 5910 | 0.1381 | 0.9607 | 0.9011 | 0.9069 | 0.8954 | | 0.1043 | 2.0 | 11820 | 0.1172 | 0.9690 | 0.9224 | 0.9250 | 0.9198 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "ellis-v2-emotion-leadership-multi-label", "results": []}]}
gsl22/ellis-v2-emotion-leadership-multi-label
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:07:33+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
ellis-v2-emotion-leadership-multi-label ======================================= This model is a fine-tuned version of distilbert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1172 * Accuracy: 0.9690 * F1: 0.9224 * Precision: 0.9250 * Recall: 0.9198 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 3 * eval\_batch\_size: 3 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
MVRL/geosynth-papersculpting-lora
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-25T17:08:59+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**
 This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .

 ## Usage

 ```python
 
 model = load_from_hub(repo_id="Asubramanian19/Taxi-v3", filename="q-learning.pkl")

 # Don't forget to check if you need to add additional attributes (is_slippery=False etc)
 env = gym.make(model["env_id"])
 ```
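The snippet in the card above leans on a load_from_hub helper from the course notebook and omits its imports, so the hedged, self-contained variant below downloads the pickle directly instead; the "qtable" key name and the gymnasium five-tuple step API are assumptions based on the course convention, not details stated in this record.

```python
# Hedged, self-contained take on the usage snippet above: fetch the pickled
# Q-table from the repo named in this record and roll out one greedy episode.
# The "qtable" key name is assumed from the Deep RL course convention.
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Asubramanian19/Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
qtable = model["qtable"]

state, _ = env.reset(seed=42)
done, total_reward = False, 0
while not done:
    action = int(qtable[state].argmax())  # greedy action from the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```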
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
Asubramanian19/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-25T17:10:24+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3
 This is a trained model of a Q-Learning agent playing Taxi-v3 .

 ## Usage
[ "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-dmae-va-U5-100bcont This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5453 - Accuracy: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.9 | 7 | 0.5397 | 0.85 | | 0.3809 | 1.94 | 15 | 0.5212 | 0.85 | | 0.316 | 2.97 | 23 | 0.5690 | 0.8 | | 0.2892 | 4.0 | 31 | 0.6506 | 0.7667 | | 0.2892 | 4.9 | 38 | 0.5529 | 0.8333 | | 0.2127 | 5.94 | 46 | 0.4987 | 0.8333 | | 0.1712 | 6.97 | 54 | 0.5859 | 0.8167 | | 0.1539 | 8.0 | 62 | 0.5937 | 0.8 | | 0.1539 | 8.9 | 69 | 0.5103 | 0.8333 | | 0.1378 | 9.94 | 77 | 0.6844 | 0.7833 | | 0.1144 | 10.97 | 85 | 0.5357 | 0.85 | | 0.1055 | 12.0 | 93 | 0.6695 | 0.8 | | 0.1093 | 12.9 | 100 | 0.5593 | 0.85 | | 0.1093 | 13.94 | 108 | 0.5453 | 0.8667 | | 0.0956 | 14.97 | 116 | 0.6144 | 0.85 | | 0.1057 | 16.0 | 124 | 0.5067 | 0.8333 | | 0.0907 | 16.9 | 131 | 0.6570 | 0.8 | | 0.0907 | 17.94 | 139 | 0.5343 | 0.8667 | | 0.1184 | 18.97 | 147 | 0.5516 | 0.8667 | | 0.1014 | 20.0 | 155 | 0.8173 | 0.7667 | | 0.0997 | 20.9 | 162 | 0.6839 | 0.8167 | | 0.1067 | 21.94 | 170 | 0.5552 | 0.8667 | | 0.1067 | 22.97 | 178 | 0.5475 | 0.8667 | | 0.082 | 24.0 | 186 | 0.5567 | 0.85 | | 0.0852 | 24.9 | 193 | 0.6374 | 0.8167 | | 0.0815 | 25.94 | 201 | 0.6486 | 0.8167 | | 0.0815 | 26.97 | 209 | 0.6218 | 0.8167 | | 0.0917 | 27.1 | 210 | 0.6209 | 0.8167 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
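The card above lists only the training schedule and curves, so a hedged inference sketch for the checkpoint id given in this record follows; the image path is a placeholder and the class names are whatever the fine-tuned head defines.

```python
# Hedged sketch: classifying a single image with the fine-tuned ViT checkpoint
# from this record. "example.jpg" is a placeholder path.
from PIL import Image
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-dmae-va-U5-100bcont",
)

image = Image.open("example.jpg").convert("RGB")
for pred in clf(image, top_k=3):
    print(pred["label"], round(pred["score"], 3))
```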
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "vit-base-patch16-224-dmae-va-U5-100bcont", "results": []}]}
Augusto777/vit-base-patch16-224-dmae-va-U5-100bcont
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:10:33+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
vit-base-patch16-224-dmae-va-U5-100bcont ======================================== This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.5453 * Accuracy: 0.8667 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.05 * num\_epochs: 30 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu118 * Datasets 2.16.1 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu118\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small urdu - huzaifa This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 40 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
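To complement the training recipe above, here is a hedged transcription sketch for the checkpoint id in this record; the audio path is a placeholder, and forcing the language and task through generate_kwargs is an assumption about how the fine-tune is meant to be called, not something the card states.

```python
# Hedged sketch: transcribing one Urdu clip with the fine-tuned Whisper checkpoint
# from this record. "sample_ur.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="huzaifa1117/whisper-small-urdu-2",
)

result = asr(
    "sample_ur.wav",
    generate_kwargs={"language": "urdu", "task": "transcribe"},
)
print(result["text"])
```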
{"language": ["ur"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small urdu - huzaifa", "results": []}]}
huzaifa1117/whisper-small-urdu-2
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ur", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:11:14+00:00
[]
[ "ur" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ur #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
# Whisper Small urdu - huzaifa This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 40 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# Whisper Small urdu - huzaifa\n\nThis model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- training_steps: 40\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ur #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n", "# Whisper Small urdu - huzaifa\n\nThis model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- training_steps: 40\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ManiWavelabs/QA_FineTune_on_context
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:12:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
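The card above defers to Unit 4 of the course for usage, so the sketch below shows the kind of REINFORCE policy such a checkpoint typically wraps and how one episode would be rolled out; the layer sizes, the omitted weight-loading step, and the gymnasium API are assumptions rather than details taken from this record.

```python
# Hedged sketch of a REINFORCE-style policy for CartPole-v1, mirroring the course
# setup this card points to. Layer sizes and checkpoint layout are assumptions,
# so loading the published weights is deliberately left out.
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, s_size=4, a_size=2, h_size=16):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)  # action probabilities

    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        dist = Categorical(self.forward(state))
        action = dist.sample()
        return action.item(), dist.log_prob(action)

env = gym.make("CartPole-v1")
policy = Policy()  # untrained stand-in; real weights would be loaded here
state, _ = env.reset(seed=0)
done, episode_return = False, 0.0
while not done:
    action, _ = policy.act(state)
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```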
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
Alvaroooooooo/Reinforce-CartPole-v1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-25T17:12:30+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
transformers
# Uploaded model - **Developed by:** baconnier - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
baconnier/finance_orpo_llama3_8B_r64_51K_GGUF
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:13:00+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: baconnier - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: baconnier\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: baconnier\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
null
## Llamacpp imatrix Quantizations of maverick-llama3-8B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization. Original model: https://huggingface.co/feeltheAGI/maverick-llama3-8B/ All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [maverick-llama3-8B-Q8_0.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [maverick-llama3-8B-Q6_K.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [maverick-llama3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [maverick-llama3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [maverick-llama3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [maverick-llama3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [maverick-llama3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [maverick-llama3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [maverick-llama3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [maverick-llama3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [maverick-llama3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [maverick-llama3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [maverick-llama3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. 
| | [maverick-llama3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [maverick-llama3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [maverick-llama3-8B-Q2_K.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [maverick-llama3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [maverick-llama3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [maverick-llama3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [maverick-llama3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [maverick-llama3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [maverick-llama3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/maverick-llama3-8B-GGUF/blob/main/maverick-llama3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. 
At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
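As a concrete follow-on to the quant-selection guidance above, here is a hedged sketch that pulls one of the files listed in the table (the Q4_K_M quant, chosen only as an example) and runs it with llama-cpp-python; the context size, GPU offload setting, and the assumption that the GGUF ships a usable chat template are illustrative choices, not recommendations from the card.

```python
# Hedged sketch: download one quant listed above and chat with it locally via
# llama-cpp-python. Q4_K_M, n_ctx and full GPU offload are example choices only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="bartowski/maverick-llama3-8B-GGUF",
    filename="maverick-llama3-8B-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

# Assumes the GGUF embeds a chat template matching the ChatML prompt format the
# card lists; if it does not, build the <|im_start|>... prompt by hand instead.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain K-quants versus I-quants in two sentences."},
    ],
    max_tokens=160,
)
print(out["choices"][0]["message"]["content"])
```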
{"license": "apache-2.0", "tags": ["meta-llama/Meta-Llama-3-8B"], "datasets": ["feeltheAGI/maverick-sharegpt"], "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/maverick-llama3-8B-GGUF
null
[ "gguf", "meta-llama/Meta-Llama-3-8B", "text-generation", "dataset:feeltheAGI/maverick-sharegpt", "license:apache-2.0", "region:us" ]
null
2024-04-25T17:16:18+00:00
[]
[]
TAGS #gguf #meta-llama/Meta-Llama-3-8B #text-generation #dataset-feeltheAGI/maverick-sharegpt #license-apache-2.0 #region-us
Llamacpp imatrix Quantizations of maverick-llama3-8B ---------------------------------------------------- Using <a href="URL release <a href="URL for quantization. Original model: URL All quants made using imatrix option with dataset provided by Kalomaze here Prompt format ------------- Download a file (not the whole branch) from below: -------------------------------------------------- Which file should I choose? --------------------------- A great write up with charts showing various performances is provided by Artefact2 here The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: URL feature matrix But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#gguf #meta-llama/Meta-Llama-3-8B #text-generation #dataset-feeltheAGI/maverick-sharegpt #license-apache-2.0 #region-us \n" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # common_voice_7_wav2vec2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice_7_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7389 - Accuracy: 0.7130 - Precision: 0.7267 - Recall: 0.7120 - F1: 0.7154 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.0435 | 0.9965 | 71 | 1.0445 | 0.4646 | 0.5308 | 0.4731 | 0.4258 | | 0.8918 | 1.9930 | 142 | 0.9129 | 0.5842 | 0.5833 | 0.5863 | 0.5820 | | 0.7809 | 2.9895 | 213 | 0.8520 | 0.6264 | 0.6499 | 0.6309 | 0.6210 | | 0.6909 | 4.0 | 285 | 0.8144 | 0.6448 | 0.6514 | 0.6447 | 0.6464 | | 0.6196 | 4.9965 | 356 | 0.7921 | 0.6677 | 0.6686 | 0.6690 | 0.6677 | | 0.5561 | 5.9930 | 427 | 0.8093 | 0.6664 | 0.6961 | 0.6641 | 0.6671 | | 0.5325 | 6.9895 | 498 | 0.7579 | 0.6910 | 0.7002 | 0.6906 | 0.6931 | | 0.4862 | 8.0 | 570 | 0.7528 | 0.6971 | 0.7200 | 0.6954 | 0.6997 | | 0.433 | 8.9965 | 641 | 0.7342 | 0.7090 | 0.7197 | 0.7086 | 0.7115 | | 0.4151 | 9.9649 | 710 | 0.7389 | 0.7130 | 0.7267 | 0.7120 | 0.7154 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
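Beyond the metrics above, a hedged classification sketch for the checkpoint id in this record is shown below; the clip path is a placeholder and the label set is whatever the fine-tuned head defines, since the card itself does not spell it out.

```python
# Hedged sketch: scoring one audio clip with the fine-tuned wav2vec2 classifier
# from this record. "clip.wav" is a placeholder path; labels come from the config.
from transformers import pipeline

clf = pipeline("audio-classification", model="aydink/common_voice_7_wav2vec2")

for pred in clf("clip.wav", top_k=3):
    print(pred["label"], round(pred["score"], 3))
```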
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_7_0"], "metrics": ["accuracy", "precision", "recall", "f1"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "common_voice_7_wav2vec2", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "common_voice_7_0", "type": "common_voice_7_0", "config": "de", "split": "None", "args": "de"}, "metrics": [{"type": "accuracy", "value": 0.712967032967033, "name": "Accuracy"}, {"type": "precision", "value": 0.7266665926711853, "name": "Precision"}, {"type": "recall", "value": 0.7120000745933662, "name": "Recall"}, {"type": "f1", "value": 0.7153649538684442, "name": "F1"}]}]}]}
aydink/common_voice_7_wav2vec2
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:common_voice_7_0", "base_model:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:19:13+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-common_voice_7_0 #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us
common\_voice\_7\_wav2vec2 ========================== This model is a fine-tuned version of facebook/wav2vec2-base on the common\_voice\_7\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.7389 * Accuracy: 0.7130 * Precision: 0.7267 * Recall: 0.7120 * F1: 0.7154 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-common_voice_7_0 #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
adldl/Meta-Llama-3-8B-Instruct_V1
null
[ "transformers", "safetensors", "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:19:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gguf #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gguf #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
{"library_name": "peft", "base_model": "mistral-community/Mistral-7B-v0.2"}
Abuthahir/Mistral_7B_Classifier_2.5K
null
[ "peft", "arxiv:1910.09700", "base_model:mistral-community/Mistral-7B-v0.2", "region:us" ]
null
2024-04-25T17:24:07+00:00
[ "1910.09700" ]
[]
TAGS #peft #arxiv-1910.09700 #base_model-mistral-community/Mistral-7B-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.7.1
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.1" ]
[ "TAGS\n#peft #arxiv-1910.09700 #base_model-mistral-community/Mistral-7B-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.1" ]
text-to-image
diffusers
# SoteDiffusion Wuerstchen3 Anime finetune of Würstchen V3. Currently in an early state of training. No commercial use thanks to StabilityAI. # Usage Please refer to the main model: https://huggingface.co/Disty0/sotediffusion-wuerstchen3-alpha1 ## Dataset Used the same dataset as Disty0/sote-diffusion-cascade-decoder-alpha0. Changed the training parameters. Trained with ~98K images. ## Training: **GPU used for training**: 1x AMD RX 7900 XTX 24GB **Software used**: https://github.com/2kpr/StableCascade ### Config: ``` experiment_id: sotediffusion-sc-b_3b model_version: 3B dtype: bfloat16 use_fsdp: False batch_size: 16 grad_accum_steps: 16 updates: 6125 backup_every: 512 save_every: 256 warmup_updates: 100 lr: 1.0e-5 optimizer_type: Adafactor adaptive_loss_weight: False stochastic_rounding: True image_size: 768 multi_aspect_ratio: [1/1, 1/2, 1/3, 2/3, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 5/6, 9/16] shift: 4 checkpoint_path: /mnt/DataSSD/AI/SoteDiffusion/StableCascade/ output_path: /mnt/DataSSD/AI/SoteDiffusion/StableCascade/ webdataset_path: file:/mnt/DataSSD/AI/anime_image_dataset/best/newest_best.tar effnet_checkpoint_path: /mnt/DataSSD/AI/models/sd-cascade/effnet_encoder.safetensors stage_a_checkpoint_path: /mnt/DataSSD/AI/models/sd-cascade/stage_a.safetensors generator_checkpoint_path: /mnt/DataSSD/AI/SoteDiffusion/StableCascade/sotediffusion-sc_3b-stage_b-alpha0.safetensors ``` ## Limitations and Bias ### Bias - This model is intended for anime illustrations. Realistic capabilities are not tested at all. - Still underbaked. ### Limitations - Can fall back to realistic. Add "realistic" tag to the negatives when this happens. - Far-shot eyes can be bad. - Anatomy and hands can be bad. ## License (This part is copied directly from Animagine V3.1 and modified.) SoteDiffusion models fall under [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license, which is compatible with Stable Diffusion models’ license. Key points: 1. **Modification Sharing:** If you modify SoteDiffusion models, you must share both your changes and the original license. 2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too. 3. **Distribution Terms:** Any distribution must be under this license or another with similar rules. 4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values. **Notes**: Anything not covered by the Fair AI license is inherited from the Stability AI Non-Commercial license, which is named LICENSE_INHERIT. Meaning: still no commercial use of any kind.
{"license": "other", "pipeline_tag": "text-to-image", "license_name": "faipl-1.0-sd", "license_link": "LICENSE", "prior": ["Disty0/sotediffusion-wuerstchen3-alpha1"]}
Disty0/sotediffusion-wuerstchen3-alpha1-decoder
null
[ "diffusers", "safetensors", "text-to-image", "license:other", "diffusers:StableCascadeDecoderPipeline", "region:us" ]
null
2024-04-25T17:24:52+00:00
[]
[]
TAGS #diffusers #safetensors #text-to-image #license-other #diffusers-StableCascadeDecoderPipeline #region-us
# SoteDiffusion Wuerstchen3 Anime finetune of Würstchen V3. Currently in an early state of training. No commercial use thanks to StabilityAI. # Usage Please refer to the main model: URL ## Dataset Used the same dataset as Disty0/sote-diffusion-cascade-decoder-alpha0. Changed the training parameters. Trained with ~98K images. ## Training: GPU used for training: 1x AMD RX 7900 XTX 24GB Software used: URL ### Config: ## Limitations and Bias ### Bias - This model is intended for anime illustrations. Realistic capabilities are not tested at all. - Still underbaked. ### Limitations - Can fall back to realistic. Add "realistic" tag to the negatives when this happens. - Far-shot eyes can be bad. - Anatomy and hands can be bad. ## License (This part is copied directly from Animagine V3.1 and modified.) SoteDiffusion models fall under Fair AI Public License 1.0-SD license, which is compatible with Stable Diffusion models’ license. Key points: 1. Modification Sharing: If you modify SoteDiffusion models, you must share both your changes and the original license. 2. Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too. 3. Distribution Terms: Any distribution must be under this license or another with similar rules. 4. Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values. Notes: Anything not covered by the Fair AI license is inherited from the Stability AI Non-Commercial license, which is named LICENSE_INHERIT. Meaning: still no commercial use of any kind.
[ "# SoteDiffusion Wuerstchen3\n\nAnime finetune of Würstchen V3. \nCurrently is in early state in training. \nNo commercial use thanks to StabilityAI.", "# Usage\nPlease refer to the main model: URL", "## Dataset\n\nUsed the same dataset as Disty0/sote-diffusion-cascade-decoder-alpha0. \nChanged the training parameters. \nTrained with 98K~ images.", "## Training:\n\nGPU used for training: 1x AMD RX 7900 XTX 24GB \n\nSoftware used: URL", "### Config:", "## Limitations and Bias", "### Bias\n\n- This model is intended for anime illustrations. \n Realistic capabilites are not tested at all. \n- Still underbaked.", "### Limitations\n- Can fall back to realistic. \n Add \"realistic\" tag to the negatives when this happens. \n- Far shot eyes are can bad. \n- Anatomy and hands can bad.", "## License\n(This part is copied directly from Animagine V3.1 and modified.)\n\nSoteDiffusion models falls under Fair AI Public License 1.0-SD license, which is compatible with Stable Diffusion models’ license. Key points:\n\n1. Modification Sharing: If you modify SoteDiffusion models, you must share both your changes and the original license.\n2. Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.\n3. Distribution Terms: Any distribution must be under this license or another with similar rules.\n4. Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.\n\nNotes: Anything not covered by Fair AI license is inherited from Stability AI Non-Commercial license which is named as LICENSE_INHERIT. Meaning, still no commercial use of any kind." ]
[ "TAGS\n#diffusers #safetensors #text-to-image #license-other #diffusers-StableCascadeDecoderPipeline #region-us \n", "# SoteDiffusion Wuerstchen3\n\nAnime finetune of Würstchen V3. \nCurrently is in early state in training. \nNo commercial use thanks to StabilityAI.", "# Usage\nPlease refer to the main model: URL", "## Dataset\n\nUsed the same dataset as Disty0/sote-diffusion-cascade-decoder-alpha0. \nChanged the training parameters. \nTrained with 98K~ images.", "## Training:\n\nGPU used for training: 1x AMD RX 7900 XTX 24GB \n\nSoftware used: URL", "### Config:", "## Limitations and Bias", "### Bias\n\n- This model is intended for anime illustrations. \n Realistic capabilites are not tested at all. \n- Still underbaked.", "### Limitations\n- Can fall back to realistic. \n Add \"realistic\" tag to the negatives when this happens. \n- Far shot eyes are can bad. \n- Anatomy and hands can bad.", "## License\n(This part is copied directly from Animagine V3.1 and modified.)\n\nSoteDiffusion models falls under Fair AI Public License 1.0-SD license, which is compatible with Stable Diffusion models’ license. Key points:\n\n1. Modification Sharing: If you modify SoteDiffusion models, you must share both your changes and the original license.\n2. Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.\n3. Distribution Terms: Any distribution must be under this license or another with similar rules.\n4. Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.\n\nNotes: Anything not covered by Fair AI license is inherited from Stability AI Non-Commercial license which is named as LICENSE_INHERIT. Meaning, still no commercial use of any kind." ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-distilled-qa This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Exact: 68.1799 - F1: 71.7591 - Total: 11873 - Hasans Exact: 70.3441 - Hasans F1: 77.5129 - Hasans Total: 5928 - Noans Exact: 66.0219 - Noans F1: 66.0219 - Noans Total: 5945 - Best Exact: 68.1799 - Best Exact Thresh: 0.0 - Best F1: 71.7591 - Best F1 Thresh: 0.0 - Loss: No log ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.244429373516175e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | | Exact | F1 | Total | Exact Thresh | F1 Thresh | Validation Loss | |:-------------:|:-----:|:-----:|:-----:|:-------:|:-------:|:-----:|:------------:|:---------:|:---------------:| | 1.9086 | 1.0 | 1030 | 11873 | 57.9719 | 62.0421 | 5945 | 0.0 | 0.0 | No log | | 1.2919 | 2.0 | 2060 | 11873 | 66.8155 | 70.0115 | 5945 | 0.0 | 0.0 | No log | | 1.1194 | 3.0 | 3090 | 11873 | 66.8070 | 70.1755 | 5945 | 0.0 | 0.0 | No log | | 1.0051 | 4.0 | 4120 | 11873 | 68.9632 | 72.4292 | 5945 | 0.0 | 0.0 | No log | | 0.9191 | 5.0 | 5150 | 11873 | 67.9609 | 71.3639 | 5945 | 0.0 | 0.0 | No log | | 0.8562 | 6.0 | 6180 | 11873 | 69.5949 | 72.9986 | 5945 | 0.0 | 0.0 | No log | | 0.8017 | 7.0 | 7210 | 11873 | 68.6095 | 72.2303 | 5945 | 0.0 | 0.0 | No log | | 0.7554 | 8.0 | 8240 | 11873 | 67.4556 | 71.0028 | 5945 | 0.0 | 0.0 | No log | | 0.7196 | 9.0 | 9270 | 11873 | 68.0788 | 71.6887 | 5945 | 0.0 | 0.0 | No log | | 0.6914 | 10.0 | 10300 | 11873 | 68.6431 | 72.1849 | 5945 | 0.0 | 0.0 | No log | | 0.6687 | 11.0 | 11330 | 11873 | 68.2473 | 71.7832 | 5945 | 0.0 | 0.0 | No log | | 0.6517 | 12.0 | 12360 | 11873 | 68.1799 | 71.7591 | 5945 | 0.0 | 0.0 | No log | ### Framework versions - Transformers 4.28.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "model-index": [{"name": "electra-distilled-qa", "results": []}]}
kasohrab/electra-distilled-qa
null
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:27:25+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #electra #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
electra-distilled-qa ==================== This model is a fine-tuned version of google/electra-small-discriminator on an unknown dataset. It achieves the following results on the evaluation set: * Exact: 68.1799 * F1: 71.7591 * Total: 11873 * Hasans Exact: 70.3441 * Hasans F1: 77.5129 * Hasans Total: 5928 * Noans Exact: 66.0219 * Noans F1: 66.0219 * Noans Total: 5945 * Best Exact: 68.1799 * Best Exact Thresh: 0.0 * Best F1: 71.7591 * Best F1 Thresh: 0.0 * Loss: No log Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4.244429373516175e-05 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 33 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 12 ### Training results ### Framework versions * Transformers 4.28.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4.244429373516175e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 33\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12", "### Training results", "### Framework versions\n\n\n* Transformers 4.28.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #electra #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4.244429373516175e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 33\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12", "### Training results", "### Framework versions\n\n\n* Transformers 4.28.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3" ]
null
transformers
# DO NOT DOWNLOAD, IT DOESNT WORK YET <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yeswondwerr/llava-llama3-8b
null
[ "transformers", "safetensors", "llava", "pretraining", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:27:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llava #pretraining #arxiv-1910.09700 #endpoints_compatible #region-us
# DO NOT DOWNLOAD, IT DOESNT WORK YET ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# DO NOT DOWNLOAD, IT DOESNT WORK YET", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llava #pretraining #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# DO NOT DOWNLOAD, IT DOESNT WORK YET", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Summarization-Phi-3 This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the scitldr dataset. It achieves the following results on the evaluation set: - Loss: 2.1554 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0689 | 0.2510 | 500 | 2.1439 | | 2.0455 | 0.5020 | 1000 | 2.1388 | | 2.0665 | 0.7530 | 1500 | 2.1349 | | 2.0481 | 1.0040 | 2000 | 2.1308 | | 1.9051 | 1.2550 | 2500 | 2.1573 | | 1.8524 | 1.5060 | 3000 | 2.1588 | | 1.8247 | 1.7570 | 3500 | 2.1554 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["scitldr"], "base_model": "microsoft/Phi-3-mini-128k-instruct", "model-index": [{"name": "Summarization-Phi-3", "results": []}]}
pkbiswas/Phi-3-Summarization-QLoRa
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:scitldr", "base_model:microsoft/Phi-3-mini-128k-instruct", "license:mit", "region:us" ]
null
2024-04-25T17:28:48+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-scitldr #base_model-microsoft/Phi-3-mini-128k-instruct #license-mit #region-us
Summarization-Phi-3 =================== This model is a fine-tuned version of microsoft/Phi-3-mini-128k-instruct on the scitldr dataset. It achieves the following results on the evaluation set: * Loss: 2.1554 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-scitldr #base_model-microsoft/Phi-3-mini-128k-instruct #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
transformers
# Uploaded model - **Developed by:** sal076 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
sal076/warc
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:28:52+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: sal076 - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: sal076\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: sal076\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: NousResearch/Meta-Llama-3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: b-mc2/sql-create-context type: context_qa.load_v2 dataset_prepared_path: last_run_prepared val_set_size: 0.05 output_dir: ./artificialguybr/llama3-8b-redmond-code290k sequence_len: 8192 sample_packing: true pad_to_sequence_len: true wandb_project: artificialguybr/llama3-8b-redmond-code290k wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 3 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 2 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # LLAMA 3 8B Redmond CODE 290K Thanks to [Redmond.ai](https://redmond.ai) for the GPU Support! This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the [ajibawa-2023/Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) dataset. ## Model description The Code-290k-ShareGPT model is a large language model designed to generate code and explanations in various programming languages, including Python, Java, JavaScript, GO, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell, and more. It takes as input a prompt or question and outputs a corresponding code snippet with a detailed explanation. The model is trained on a massive dataset of approximately 290,000 conversations, each consisting of two conversations. This dataset is in the Vicuna/ShareGPT format, which allows for efficient training and fine-tuning of the model. The model is intended to be used in applications where code generation and explanation are necessary, such as coding assistance, education, and knowledge sharing. 
## Intended uses & limitations Intended uses: Generating code and explanations in various programming languages Assisting in coding tasks and education Providing knowledge sharing and documentation Integrating with other language models or tools to provide a more comprehensive coding experience Limitations: The model may not perform well on very rare or niche programming languages The model may not generalize well to unseen coding styles or conventions The model may not be able to handle extremely complex code or edge cases The model may not be able to provide explanations for highly abstract or theoretical concepts The model may not be able to handle ambiguous or open-ended prompts## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results Soon ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
{"tags": ["generated_from_trainer"], "base_model": "NousResearch/Meta-Llama-3-8B", "model-index": [{"name": "llama3-8b-redmond-code290k", "results": []}]}
artificialguybr/llama3-8b-redmond-code290k
null
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:NousResearch/Meta-Llama-3-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T17:31:33+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #generated_from_trainer #conversational #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL alt="Built with Axolotl" width="200" height="32"/> <details><summary>See axolotl config</summary> axolotl version: '0.4.0' </details><br> # LLAMA 3 8B Redmond CODE 290K Thanks to URL for the GPU Support! This model is a fine-tuned version of NousResearch/Meta-Llama-3-8B on the ajibawa-2023/Code-290k-ShareGPT dataset. ## Model description The Code-290k-ShareGPT model is a large language model designed to generate code and explanations in various programming languages, including Python, Java, JavaScript, GO, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell, and more. It takes as input a prompt or question and outputs a corresponding code snippet with a detailed explanation. The model is trained on a massive dataset of approximately 290,000 conversations, each consisting of two conversations. This dataset is in the Vicuna/ShareGPT format, which allows for efficient training and fine-tuning of the model. The model is intended to be used in applications where code generation and explanation are necessary, such as coding assistance, education, and knowledge sharing. ## Intended uses & limitations Intended uses: Generating code and explanations in various programming languages Assisting in coding tasks and education Providing knowledge sharing and documentation Integrating with other language models or tools to provide a more comprehensive coding experience Limitations: The model may not perform well on very rare or niche programming languages The model may not generalize well to unseen coding styles or conventions The model may not be able to handle extremely complex code or edge cases The model may not be able to provide explanations for highly abstract or theoretical concepts The model may not be able to handle ambiguous or open-ended prompts## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results Soon ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "# LLAMA 3 8B Redmond CODE 290K\n\nThanks to URL for the GPU Support!\n\nThis model is a fine-tuned version of NousResearch/Meta-Llama-3-8B on the ajibawa-2023/Code-290k-ShareGPT dataset.", "## Model description\n\nThe Code-290k-ShareGPT model is a large language model designed to generate code and explanations in various programming languages, including Python, Java, JavaScript, GO, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell, and more. It takes as input a prompt or question and outputs a corresponding code snippet with a detailed explanation.\n\nThe model is trained on a massive dataset of approximately 290,000 conversations, each consisting of two conversations. This dataset is in the Vicuna/ShareGPT format, which allows for efficient training and fine-tuning of the model.\n\nThe model is intended to be used in applications where code generation and explanation are necessary, such as coding assistance, education, and knowledge sharing.", "## Intended uses & limitations\nIntended uses:\n\nGenerating code and explanations in various programming languages\n\nAssisting in coding tasks and education\n\nProviding knowledge sharing and documentation\n\nIntegrating with other language models or tools to provide a more comprehensive coding experience\n\nLimitations:\n\nThe model may not perform well on very rare or niche programming languages\n\nThe model may not generalize well to unseen coding styles or conventions\n\nThe model may not be able to handle extremely complex code or edge cases\n\nThe model may not be able to provide explanations for highly abstract or theoretical concepts\n\nThe model may not be able to handle ambiguous or open-ended prompts## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 2", "### Training results\n\nSoon", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.15.0\n- Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #generated_from_trainer #conversational #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# LLAMA 3 8B Redmond CODE 290K\n\nThanks to URL for the GPU Support!\n\nThis model is a fine-tuned version of NousResearch/Meta-Llama-3-8B on the ajibawa-2023/Code-290k-ShareGPT dataset.", "## Model description\n\nThe Code-290k-ShareGPT model is a large language model designed to generate code and explanations in various programming languages, including Python, Java, JavaScript, GO, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell, and more. It takes as input a prompt or question and outputs a corresponding code snippet with a detailed explanation.\n\nThe model is trained on a massive dataset of approximately 290,000 conversations, each consisting of two conversations. This dataset is in the Vicuna/ShareGPT format, which allows for efficient training and fine-tuning of the model.\n\nThe model is intended to be used in applications where code generation and explanation are necessary, such as coding assistance, education, and knowledge sharing.", "## Intended uses & limitations\nIntended uses:\n\nGenerating code and explanations in various programming languages\n\nAssisting in coding tasks and education\n\nProviding knowledge sharing and documentation\n\nIntegrating with other language models or tools to provide a more comprehensive coding experience\n\nLimitations:\n\nThe model may not perform well on very rare or niche programming languages\n\nThe model may not generalize well to unseen coding styles or conventions\n\nThe model may not be able to handle extremely complex code or edge cases\n\nThe model may not be able to provide explanations for highly abstract or theoretical concepts\n\nThe model may not be able to handle ambiguous or open-ended prompts## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 2", "### Training results\n\nSoon", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.15.0\n- Tokenizers 0.15.0" ]
text-generation
transformers
# KangalKhan-Alpha-Emerald-7B-Fixed KangalKhan-Alpha-Emerald-7B-Fixed is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Yuma42/KangalKhan-Beta-Sapphire-7B](https://huggingface.co/Yuma42/KangalKhan-Beta-Sapphire-7B) * [Yuma42/KangalKhan-Ruby-7B-Fixed](https://huggingface.co/Yuma42/KangalKhan-Ruby-7B-Fixed) ## 🧩 Configuration ```yaml slices: - sources: - model: Yuma42/KangalKhan-Beta-Sapphire-7B layer_range: [0, 32] - model: Yuma42/KangalKhan-Ruby-7B-Fixed layer_range: [0, 32] merge_method: slerp base_model: Yuma42/KangalKhan-Beta-Sapphire-7B parameters: t: - filter: self_attn value: [0.9, 0.45, 0.65, 0.25, 0.03] - filter: mlp value: [0.1, 0.55, 0.35, 0.75, 0.97] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Yuma42/KangalKhan-Alpha-Emerald-7B-Fixed" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"language": ["en"], "license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "Yuma42/KangalKhan-Beta-Sapphire-7B", "Yuma42/KangalKhan-Ruby-7B-Fixed"], "base_model": ["Yuma42/KangalKhan-Beta-Sapphire-7B", "Yuma42/KangalKhan-Ruby-7B-Fixed"]}
Yuma42/KangalKhan-Alpha-Emerald-7B-Fixed
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Yuma42/KangalKhan-Beta-Sapphire-7B", "Yuma42/KangalKhan-Ruby-7B-Fixed", "conversational", "en", "base_model:Yuma42/KangalKhan-Beta-Sapphire-7B", "base_model:Yuma42/KangalKhan-Ruby-7B-Fixed", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T17:32:10+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #Yuma42/KangalKhan-Beta-Sapphire-7B #Yuma42/KangalKhan-Ruby-7B-Fixed #conversational #en #base_model-Yuma42/KangalKhan-Beta-Sapphire-7B #base_model-Yuma42/KangalKhan-Ruby-7B-Fixed #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# KangalKhan-Alpha-Emerald-7B-Fixed KangalKhan-Alpha-Emerald-7B-Fixed is a merge of the following models using LazyMergekit: * Yuma42/KangalKhan-Beta-Sapphire-7B * Yuma42/KangalKhan-Ruby-7B-Fixed ## Configuration ## Usage
[ "# KangalKhan-Alpha-Emerald-7B-Fixed\n\nKangalKhan-Alpha-Emerald-7B-Fixed is a merge of the following models using LazyMergekit:\n* Yuma42/KangalKhan-Beta-Sapphire-7B\n* Yuma42/KangalKhan-Ruby-7B-Fixed", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #Yuma42/KangalKhan-Beta-Sapphire-7B #Yuma42/KangalKhan-Ruby-7B-Fixed #conversational #en #base_model-Yuma42/KangalKhan-Beta-Sapphire-7B #base_model-Yuma42/KangalKhan-Ruby-7B-Fixed #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# KangalKhan-Alpha-Emerald-7B-Fixed\n\nKangalKhan-Alpha-Emerald-7B-Fixed is a merge of the following models using LazyMergekit:\n* Yuma42/KangalKhan-Beta-Sapphire-7B\n* Yuma42/KangalKhan-Ruby-7B-Fixed", "## Configuration", "## Usage" ]
sentence-similarity
sentence-transformers
# SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** None tokens - **Output Dimensionality:** 512 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): BoW() (1): Dense({'in_features': 25000, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomaarsen/wikipedia-tf-idf-bow") # Run inference sentences = [ 'A cat is on a robot.', 'A cat is pouncing on a trampoline.', 'A woman is applying eye shadow.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 512] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7328 | | **spearman_cosine** | **0.7337** | | pearson_manhattan | 0.5142 | | spearman_manhattan | 0.5088 | | pearson_euclidean | 0.5143 | | spearman_euclidean | 0.5094 | | pearson_dot | 0.5691 | | spearman_dot | 0.6686 | | pearson_max | 0.7328 | | spearman_max | 0.7337 | #### Semantic Similarity * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6516 | | **spearman_cosine** | **0.6358** | | pearson_manhattan | 0.4104 | | spearman_manhattan | 0.4058 | | pearson_euclidean | 0.4116 | | spearman_euclidean | 0.4066 | | pearson_dot | 0.4717 | | spearman_dot | 0.5537 | | pearson_max | 0.6516 | | spearman_max | 0.6358 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### sentence-transformers/stsb * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 16 characters</li><li>mean: 31.92 characters</li><li>max: 113 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 31.51 characters</li><li>max: 94 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation 
Dataset #### sentence-transformers/stsb * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 12 characters</li><li>mean: 57.37 characters</li><li>max: 144 characters</li></ul> | <ul><li>min: 17 characters</li><li>mean: 56.84 characters</li><li>max: 141 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------|:------------------------------------------------------|:------------------| | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> | | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> | | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: False - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - 
`fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: None - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | sts-test_spearman_cosine | |:------:|:----:|:-------------:|:------:|:-----------------------:|:------------------------:| | 0.5556 | 100 | 0.0725 | 0.0436 | 0.7337 | - | | 1.0 | 180 | - | - | - | 0.6358 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.000 kWh - **Carbon Emitted**: 0.000 kg of CO2 - **Hours Used**: 0.002 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 3.0.0.dev0 - Transformers: 4.41.0.dev0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.26.1 - Datasets: 2.18.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"language": ["en"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss"], "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "widget": [{"source_sentence": "A man is spitting.", "sentences": ["A man seasoning quail.", "A brown horse in a green field.", "A woman is playing the guitar."]}, {"source_sentence": "A woman is reading.", "sentences": ["A woman is slicing carrot.", "The man is hiking in the woods.", "A man is singing and playing a guitar."]}, {"source_sentence": "A woman is dancing.", "sentences": ["A woman is dancing in railway station.", "A doctor prescribes a medicine.", "The man is riding a horse."]}, {"source_sentence": "Women are running.", "sentences": ["Women are running.", "A woman is applying eye shadow.", "A woman and man are riding in a car."]}, {"source_sentence": "A cat is on a robot.", "sentences": ["A cat is pouncing on a trampoline.", "A woman is applying eye shadow.", "A woman and man are riding in a car."]}], "pipeline_tag": "sentence-similarity", "co2_eq_emissions": {"emissions": 0.11798947049821952, "energy_consumed": 0.0003035473717609365, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.002, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.7327950331192871, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.733720742976967, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.5141829243804352, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.5088476055041519, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.5143122485153392, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.5094438567737941, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.5691313208318369, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.6686075432867175, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.7327950331192871, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.733720742976967, "name": "Spearman Max"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts test", "type": "sts-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.6515536111902664, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.6357551120651417, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.4104283118123022, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.4057805136887886, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.4116066558734167, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.40663312273612934, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.4717437134789646, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.5536656048436931, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.6515536111902664, "name": "Pearson Max"}, {"type": "spearman_max", "value": 
0.6357551120651417, "name": "Spearman Max"}]}]}]}
tomaarsen/wikipedia-tf-idf-bow
null
[ "sentence-transformers", "sentence-similarity", "feature-extraction", "loss:CosineSimilarityLoss", "en", "arxiv:1908.10084", "model-index", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:32:37+00:00
[ "1908.10084" ]
[ "en" ]
TAGS #sentence-transformers #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #model-index #co2_eq_emissions #endpoints_compatible #region-us
SentenceTransformer =================== This is a sentence-transformers model trained on the sentence-transformers/stsb dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Model Details ------------- ### Model Description * Model Type: Sentence Transformer * Maximum Sequence Length: None tokens * Output Dimensionality: 512 tokens * Similarity Function: Cosine Similarity * Training Dataset: + sentence-transformers/stsb * Language: en ### Model Sources * Documentation: Sentence Transformers Documentation * Repository: Sentence Transformers on GitHub * Hugging Face: Sentence Transformers on Hugging Face ### Full Model Architecture Usage ----- ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: Then you can load this model and run inference. Evaluation ---------- ### Metrics #### Semantic Similarity * Dataset: 'sts-dev' * Evaluated with `EmbeddingSimilarityEvaluator` #### Semantic Similarity * Dataset: 'sts-test' * Evaluated with `EmbeddingSimilarityEvaluator` Training Details ---------------- ### Training Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 5,749 training samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Evaluation Dataset #### sentence-transformers/stsb * Dataset: sentence-transformers/stsb at d999f12 * Size: 1,500 evaluation samples * Columns: `sentence1`, `sentence2`, and `score` * Approximate statistics based on the first 1000 samples: * Samples: * Loss: `CosineSimilarityLoss` with these parameters: ### Training Hyperparameters #### Non-Default Hyperparameters * 'eval\_strategy': steps * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'num\_train\_epochs': 1 * 'warmup\_ratio': 0.1 * 'fp16': True #### All Hyperparameters Click to expand * 'overwrite\_output\_dir': False * 'do\_predict': False * 'eval\_strategy': steps * 'prediction\_loss\_only': False * 'per\_device\_train\_batch\_size': 32 * 'per\_device\_eval\_batch\_size': 32 * 'per\_gpu\_train\_batch\_size': None * 'per\_gpu\_eval\_batch\_size': None * 'gradient\_accumulation\_steps': 1 * 'eval\_accumulation\_steps': None * 'learning\_rate': 5e-05 * 'weight\_decay': 0.0 * 'adam\_beta1': 0.9 * 'adam\_beta2': 0.999 * 'adam\_epsilon': 1e-08 * 'max\_grad\_norm': 1.0 * 'num\_train\_epochs': 1 * 'max\_steps': -1 * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_kwargs': {} * 'warmup\_ratio': 0.1 * 'warmup\_steps': 0 * 'log\_level': passive * 'log\_level\_replica': warning * 'log\_on\_each\_node': True * 'logging\_nan\_inf\_filter': True * 'save\_safetensors': True * 'save\_on\_each\_node': False * 'save\_only\_model': False * 'no\_cuda': False * 'use\_cpu': False * 'use\_mps\_device': False * 'seed': 42 * 'data\_seed': None * 'jit\_mode\_eval': False * 'use\_ipex': False * 'bf16': False * 'fp16': True * 'fp16\_opt\_level': O1 * 'half\_precision\_backend': auto * 'bf16\_full\_eval': False * 'fp16\_full\_eval': False * 'tf32': None * 'local\_rank': 0 * 'ddp\_backend': None * 'tpu\_num\_cores': None * 'tpu\_metrics\_debug': False * 'debug': [] * 'dataloader\_drop\_last': False * 'dataloader\_num\_workers': 0 * 'dataloader\_prefetch\_factor': None * 'past\_index': -1 * 'disable\_tqdm': False * 'remove\_unused\_columns': True * 
'label\_names': None * 'load\_best\_model\_at\_end': False * 'ignore\_data\_skip': False * 'fsdp': [] * 'fsdp\_min\_num\_params': 0 * 'fsdp\_config': {'min\_num\_params': 0, 'xla': False, 'xla\_fsdp\_v2': False, 'xla\_fsdp\_grad\_ckpt': False} * 'fsdp\_transformer\_layer\_cls\_to\_wrap': None * 'accelerator\_config': {'split\_batches': False, 'dispatch\_batches': None, 'even\_batches': True, 'use\_seedable\_sampler': True, 'non\_blocking': False, 'gradient\_accumulation\_kwargs': None} * 'deepspeed': None * 'label\_smoothing\_factor': 0.0 * 'optim': adamw\_torch * 'optim\_args': None * 'adafactor': False * 'group\_by\_length': False * 'length\_column\_name': length * 'ddp\_find\_unused\_parameters': None * 'ddp\_bucket\_cap\_mb': None * 'ddp\_broadcast\_buffers': None * 'dataloader\_pin\_memory': True * 'dataloader\_persistent\_workers': False * 'skip\_memory\_metrics': True * 'use\_legacy\_prediction\_loop': False * 'push\_to\_hub': False * 'resume\_from\_checkpoint': None * 'hub\_model\_id': None * 'hub\_strategy': every\_save * 'hub\_private\_repo': False * 'hub\_always\_push': False * 'gradient\_checkpointing': False * 'gradient\_checkpointing\_kwargs': None * 'include\_inputs\_for\_metrics': False * 'eval\_do\_concat\_batches': True * 'fp16\_backend': auto * 'push\_to\_hub\_model\_id': None * 'push\_to\_hub\_organization': None * 'mp\_parameters': * 'auto\_find\_batch\_size': False * 'full\_determinism': False * 'torchdynamo': None * 'ray\_scope': last * 'ddp\_timeout': 1800 * 'torch\_compile': False * 'torch\_compile\_backend': None * 'torch\_compile\_mode': None * 'dispatch\_batches': None * 'split\_batches': None * 'include\_tokens\_per\_second': False * 'include\_num\_input\_tokens\_seen': False * 'neftune\_noise\_alpha': None * 'optim\_target\_modules': None * 'batch\_sampler': batch\_sampler * 'multi\_dataset\_batch\_sampler': proportional ### Training Logs ### Environmental Impact Carbon emissions were measured using CodeCarbon. * Energy Consumed: 0.000 kWh * Carbon Emitted: 0.000 kg of CO2 * Hours Used: 0.002 hours ### Training Hardware * On Cloud: No * GPU Model: 1 x NVIDIA GeForce RTX 3090 * CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K * RAM Size: 31.78 GB ### Framework Versions * Python: 3.11.6 * Sentence Transformers: 3.0.0.dev0 * Transformers: 4.41.0.dev0 * PyTorch: 2.3.0+cu121 * Accelerate: 0.26.1 * Datasets: 2.18.0 * Tokenizers: 0.19.1 ### BibTeX #### Sentence Transformers
[ "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Maximum Sequence Length: None tokens\n* Output Dimensionality: 512 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\n\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`", "#### Semantic Similarity\n\n\n* Dataset: 'sts-test'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': 
False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.000 kWh\n* Carbon Emitted: 0.000 kg of CO2\n* Hours Used: 0.002 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
[ "TAGS\n#sentence-transformers #sentence-similarity #feature-extraction #loss-CosineSimilarityLoss #en #arxiv-1908.10084 #model-index #co2_eq_emissions #endpoints_compatible #region-us \n", "### Model Description\n\n\n* Model Type: Sentence Transformer\n* Maximum Sequence Length: None tokens\n* Output Dimensionality: 512 tokens\n* Similarity Function: Cosine Similarity\n* Training Dataset:\n\n\n\t+ sentence-transformers/stsb\n* Language: en", "### Model Sources\n\n\n* Documentation: Sentence Transformers Documentation\n* Repository: Sentence Transformers on GitHub\n* Hugging Face: Sentence Transformers on Hugging Face", "### Full Model Architecture\n\n\nUsage\n-----", "### Direct Usage (Sentence Transformers)\n\n\nFirst install the Sentence Transformers library:\n\n\nThen you can load this model and run inference.\n\n\nEvaluation\n----------", "### Metrics", "#### Semantic Similarity\n\n\n* Dataset: 'sts-dev'\n* Evaluated with `EmbeddingSimilarityEvaluator`", "#### Semantic Similarity\n\n\n* Dataset: 'sts-test'\n* Evaluated with `EmbeddingSimilarityEvaluator`\n\n\n\nTraining Details\n----------------", "### Training Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 5,749 training samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Evaluation Dataset", "#### sentence-transformers/stsb\n\n\n* Dataset: sentence-transformers/stsb at d999f12\n* Size: 1,500 evaluation samples\n* Columns: `sentence1`, `sentence2`, and `score`\n* Approximate statistics based on the first 1000 samples:\n* Samples:\n* Loss: `CosineSimilarityLoss` with these parameters:", "### Training Hyperparameters", "#### Non-Default Hyperparameters\n\n\n* 'eval\\_strategy': steps\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'num\\_train\\_epochs': 1\n* 'warmup\\_ratio': 0.1\n* 'fp16': True", "#### All Hyperparameters\n\n\nClick to expand\n* 'overwrite\\_output\\_dir': False\n* 'do\\_predict': False\n* 'eval\\_strategy': steps\n* 'prediction\\_loss\\_only': False\n* 'per\\_device\\_train\\_batch\\_size': 32\n* 'per\\_device\\_eval\\_batch\\_size': 32\n* 'per\\_gpu\\_train\\_batch\\_size': None\n* 'per\\_gpu\\_eval\\_batch\\_size': None\n* 'gradient\\_accumulation\\_steps': 1\n* 'eval\\_accumulation\\_steps': None\n* 'learning\\_rate': 5e-05\n* 'weight\\_decay': 0.0\n* 'adam\\_beta1': 0.9\n* 'adam\\_beta2': 0.999\n* 'adam\\_epsilon': 1e-08\n* 'max\\_grad\\_norm': 1.0\n* 'num\\_train\\_epochs': 1\n* 'max\\_steps': -1\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_kwargs': {}\n* 'warmup\\_ratio': 0.1\n* 'warmup\\_steps': 0\n* 'log\\_level': passive\n* 'log\\_level\\_replica': warning\n* 'log\\_on\\_each\\_node': True\n* 'logging\\_nan\\_inf\\_filter': True\n* 'save\\_safetensors': True\n* 'save\\_on\\_each\\_node': False\n* 'save\\_only\\_model': False\n* 'no\\_cuda': False\n* 'use\\_cpu': False\n* 'use\\_mps\\_device': False\n* 'seed': 42\n* 'data\\_seed': None\n* 'jit\\_mode\\_eval': False\n* 'use\\_ipex': False\n* 'bf16': False\n* 'fp16': True\n* 'fp16\\_opt\\_level': O1\n* 'half\\_precision\\_backend': auto\n* 'bf16\\_full\\_eval': False\n* 'fp16\\_full\\_eval': False\n* 'tf32': None\n* 'local\\_rank': 0\n* 'ddp\\_backend': None\n* 'tpu\\_num\\_cores': None\n* 'tpu\\_metrics\\_debug': False\n* 'debug': []\n* 'dataloader\\_drop\\_last': False\n* 'dataloader\\_num\\_workers': 0\n* 
'dataloader\\_prefetch\\_factor': None\n* 'past\\_index': -1\n* 'disable\\_tqdm': False\n* 'remove\\_unused\\_columns': True\n* 'label\\_names': None\n* 'load\\_best\\_model\\_at\\_end': False\n* 'ignore\\_data\\_skip': False\n* 'fsdp': []\n* 'fsdp\\_min\\_num\\_params': 0\n* 'fsdp\\_config': {'min\\_num\\_params': 0, 'xla': False, 'xla\\_fsdp\\_v2': False, 'xla\\_fsdp\\_grad\\_ckpt': False}\n* 'fsdp\\_transformer\\_layer\\_cls\\_to\\_wrap': None\n* 'accelerator\\_config': {'split\\_batches': False, 'dispatch\\_batches': None, 'even\\_batches': True, 'use\\_seedable\\_sampler': True, 'non\\_blocking': False, 'gradient\\_accumulation\\_kwargs': None}\n* 'deepspeed': None\n* 'label\\_smoothing\\_factor': 0.0\n* 'optim': adamw\\_torch\n* 'optim\\_args': None\n* 'adafactor': False\n* 'group\\_by\\_length': False\n* 'length\\_column\\_name': length\n* 'ddp\\_find\\_unused\\_parameters': None\n* 'ddp\\_bucket\\_cap\\_mb': None\n* 'ddp\\_broadcast\\_buffers': None\n* 'dataloader\\_pin\\_memory': True\n* 'dataloader\\_persistent\\_workers': False\n* 'skip\\_memory\\_metrics': True\n* 'use\\_legacy\\_prediction\\_loop': False\n* 'push\\_to\\_hub': False\n* 'resume\\_from\\_checkpoint': None\n* 'hub\\_model\\_id': None\n* 'hub\\_strategy': every\\_save\n* 'hub\\_private\\_repo': False\n* 'hub\\_always\\_push': False\n* 'gradient\\_checkpointing': False\n* 'gradient\\_checkpointing\\_kwargs': None\n* 'include\\_inputs\\_for\\_metrics': False\n* 'eval\\_do\\_concat\\_batches': True\n* 'fp16\\_backend': auto\n* 'push\\_to\\_hub\\_model\\_id': None\n* 'push\\_to\\_hub\\_organization': None\n* 'mp\\_parameters':\n* 'auto\\_find\\_batch\\_size': False\n* 'full\\_determinism': False\n* 'torchdynamo': None\n* 'ray\\_scope': last\n* 'ddp\\_timeout': 1800\n* 'torch\\_compile': False\n* 'torch\\_compile\\_backend': None\n* 'torch\\_compile\\_mode': None\n* 'dispatch\\_batches': None\n* 'split\\_batches': None\n* 'include\\_tokens\\_per\\_second': False\n* 'include\\_num\\_input\\_tokens\\_seen': False\n* 'neftune\\_noise\\_alpha': None\n* 'optim\\_target\\_modules': None\n* 'batch\\_sampler': batch\\_sampler\n* 'multi\\_dataset\\_batch\\_sampler': proportional", "### Training Logs", "### Environmental Impact\n\n\nCarbon emissions were measured using CodeCarbon.\n\n\n* Energy Consumed: 0.000 kWh\n* Carbon Emitted: 0.000 kg of CO2\n* Hours Used: 0.002 hours", "### Training Hardware\n\n\n* On Cloud: No\n* GPU Model: 1 x NVIDIA GeForce RTX 3090\n* CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K\n* RAM Size: 31.78 GB", "### Framework Versions\n\n\n* Python: 3.11.6\n* Sentence Transformers: 3.0.0.dev0\n* Transformers: 4.41.0.dev0\n* PyTorch: 2.3.0+cu121\n* Accelerate: 0.26.1\n* Datasets: 2.18.0\n* Tokenizers: 0.19.1", "### BibTeX", "#### Sentence Transformers" ]
null
transformers
# Uploaded model - **Developed by:** universalml - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
universalml/lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:33:17+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: universalml - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: universalml\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: universalml\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
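The Unsloth upload recorded above names its base model (unsloth/llama-3-8b-bnb-4bit) but ships no loading snippet. A minimal sketch of one plausible way to run it with transformers + peft follows; it assumes the universalml/lora_model repo stores a LoRA/PEFT adapter rather than merged weights (the repo name suggests this, but the card does not say), and the prompt and generation settings are illustrative only.

```python
# Sketch: load an Unsloth-finetuned adapter on top of its 4-bit base model.
# Assumption: "universalml/lora_model" is a PEFT (LoRA) adapter, not merged weights.
# Requires: transformers, peft, accelerate, bitsandbytes (for the bnb-4bit base).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-bnb-4bit"   # base model named in the card above
adapter_id = "universalml/lora_model"     # repo id from the record above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern would apply to the other Unsloth-derived uploads in this section, under the same adapter assumption.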
reinforcement-learning
stable-baselines3
# **MlpPolicy** Agent playing **LunarLander-v2** This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "MlpPolicy", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "256.50 +/- 13.16", "name": "mean_reward", "verified": false}]}]}]}
andreaostuni/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-25T17:34:22+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# MlpPolicy Agent playing LunarLander-v2 This is a trained model of a MlpPolicy agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# MlpPolicy Agent playing LunarLander-v2\nThis is a trained model of a MlpPolicy agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# MlpPolicy Agent playing LunarLander-v2\nThis is a trained model of a MlpPolicy agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
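The stable-baselines3 card recorded above leaves its usage section as a TODO with only stub imports. A minimal sketch of how such a checkpoint is typically pulled and evaluated with huggingface_sb3 follows; the PPO class is inferred from the repo id and the archive filename is an assumption, as neither is stated in the card.

```python
# Sketch: download and evaluate an SB3 LunarLander-v2 checkpoint from the Hub.
# Assumptions: the policy is PPO and the archive is named "ppo-LunarLander-v2.zip".
# Requires: stable-baselines3, huggingface_sb3, gymnasium[box2d].
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="andreaostuni/ppo-LunarLander-v2",  # repo id from the record above
    filename="ppo-LunarLander-v2.zip",          # assumed archive name
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```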
text-generation
transformers
# Uploaded model - **Developed by:** baconnier - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "orpo"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
baconnier/finance_orpo_llama3_8B_r64_51K
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "orpo", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:38:32+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #orpo #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: baconnier - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: baconnier\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #orpo #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: baconnier\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lleticiasilvaa/3b-synthetic-100-checkpoint-8771
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:38:36+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
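The quick-start section of this card is empty. As a hedged illustration only, a minimal way to load the checkpoint with 🧨 diffusers could look like the sketch below: the `StableDiffusionXLPipeline` class comes from this record's tags, while the fp16 device placement and the 8-step / low-guidance sampling values are assumptions inferred from the "hyper_8step" name, not values stated in the card.

```python
from diffusers import StableDiffusionXLPipeline
import torch

# Repo id taken from this record's `id` field; fp16 on GPU is an assumed setup.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "ghunkins/juggernautXL_hyper_8step",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Few-step "hyper" SDXL checkpoints are usually sampled with very few steps and
# low guidance; 8 steps / CFG 1.5 are illustrative guesses, not card values.
image = pipe(
    prompt="a scenic mountain lake at sunrise, photorealistic",
    num_inference_steps=8,
    guidance_scale=1.5,
).images[0]
image.save("sample.png")
```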
{"library_name": "diffusers"}
ghunkins/juggernautXL_hyper_8step
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-25T17:41:20+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-8000-dpo-1000 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
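To make the hyperparameter list above more concrete, the sketch below shows one way those values could be wired into 🤗 TRL's `DPOTrainer`. Only the numeric settings mirror the card; the training data, the `LoraConfig` values, `beta`, the output directory and the exact trainer signature (TRL's API has changed across versions) are illustrative assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical preference data with "prompt"/"chosen"/"rejected" columns;
# the card only says "an unknown dataset".
train_dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

# Numeric values mirror the card; the Adam betas/epsilon it lists match the defaults.
training_args = TrainingArguments(
    output_dir="llama3-8b-8000-dpo-1000",
    learning_rate=2e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=1000,  # "training_steps: 1000"
)

# Illustrative LoRA settings; the card states PEFT was used but not how.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

trainer = DPOTrainer(
    model=model,
    ref_model=None,      # with a PEFT adapter TRL can build the reference implicitly
    args=training_args,
    beta=0.1,            # TRL's default preference temperature, not stated in the card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```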
{"license": "other", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "llama3-8b-8000-dpo-1000", "results": []}]}
Yaxin1992/llama3-8b-8000-dpo-1000
null
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-04-25T17:42:01+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
# llama3-8b-8000-dpo-1000 This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# llama3-8b-8000-dpo-1000\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 1000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n", "# llama3-8b-8000-dpo-1000\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 1000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text-generation
transformers
<div align="center"> <h1> MiniCPM </h1> </div> <p align="center"> <a href="https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4" target="_blank">MiniCPM 技术报告</a><a href="https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4" target="_blank"> Technical Report</a> | <a href="https://github.com/OpenBMB/OmniLMM/" target="_blank">OmniLMM 多模态模型 Multi-modal Model</a> | <a href="https://luca.cn/" target="_blank">CPM-C 千亿模型试用 ~100B Model Trial </a> </p> MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-2B 仅有 24亿(2.4B)的非词嵌入参数量。 - 经过 SFT 后,MiniCPM 在公开综合性评测集上,MiniCPM 与 Mistral-7B相近(中文、数学、代码能力更优),整体性能超越 Llama2-13B、MPT-30B、Falcon-40B 等模型。 - 经过 DPO 后,MiniCPM 在当前最接近用户体感的评测集 MTBench上,MiniCPM-2B 也超越了 Llama2-70B-Chat、Vicuna-33B、Mistral-7B-Instruct-v0.1、Zephyr-7B-alpha 等众多代表性开源大模型。 - 以 MiniCPM-2B 为基础构建端侧多模态大模型 MiniCPM-V,整体性能在同规模模型中实现最佳,超越基于 Phi-2 构建的现有多模态大模型,在部分评测集上达到与 9.6B Qwen-VL-Chat 相当甚至更好的性能。 - 经过 Int4 量化后,MiniCPM 可在手机上进行部署推理,流式输出速度略高于人类说话速度。MiniCPM-V 也首次跑通了多模态大模型在手机上的部署。 - 一张1080/2080可高效参数微调,一张3090/4090可全参数微调,一台机器可持续训练 MiniCPM,二次开发成本较低。 我们将完全开源MiniCPM-2B的模型参数供学术研究和有限商用,以及训练过程中的所有Checkpoint和大部分非专有数据供模型机理研究。 - 基于MiniCPM-2B的指令微调与人类偏好对**MiniCPM-2B-SFT/DPO。** - 基于MiniCPM-2B的多模态模型**MiniCPM-V**,能力超越基于Phi-2的同参数级别多模态模型**。** - MiniCPM-2B-SFT/DPO的Int4量化版**MiniCPM-2B-SFT/DPO-Int4。** - 基于MLC-LLM、LLMFarm开发的MiniCPM手机端程序,**文本及多模态模型均可在手机端进行推理。** MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. - MiniCPM has very close performance compared with Mistral-7B on open-sourced general benchmarks with better ability on Chinese, Mathmetics and Coding after SFT. The overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc. - After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench. - MiniCPM-V, based on MiniCPM-2B, achieves the best overall performance among multimodel models of the same scale, surpassing existing multimodal large models built on Phi-2 and achieving performance comparable to or even better than 9.6B Qwen-VL-Chat on some tasks. - MiniCPM can be deployed and infer on smartphones, and the speed of streaming output is relatively higher than the verbal speed of human. MiniCPM-V is the first multi-modal models that can be deployed on smartphones. - The cost of developing based on MiniCPM is low. Parameter efficient finetuning can be conducted with a single 1080/2080 GPU and full parameter finetuning can be conducted with a 3090/4090 GPU. We release all model parameters for research and limited commercial use. We also release all the checkpoint during training and most public training data for research on model mechanism. - SFT and DPO version based on MiniCPM-2B and human preference: **MiniCPM-2B-SFT/DPO** - The multi-modal model **MiniCPM-V** based on MiniCPM-2B, which outperforms models with similar size, i.e., Phi-2 - The INT4 quantized version **MiniCPM-2B-SFT/DPO-Int4** based on MiniCPM-2B-SFT/DPO - Mobile phone application based on MLC-LLM and LLMFarm. Both language model and multimodel model can conduct inference on smartphones. 
### 评测结果 Evaluation Results 详细的评测结果位于[github仓库](https://github.com/OpenBMB/MiniCPM?tab=readme-ov-file#%E8%AF%84%E6%B5%8B%E7%BB%93%E6%9E%9C) Detailed evaluation results are in [github repo](https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md#evaluation-results) 注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。 Notice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended. We are investigating the cause now. ### 局限性 Limitations - 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进; - 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息; - 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果; - 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。 - Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model. - To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models. - Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts. - Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability. ## 模型下载 Download | HuggingFace | ModelScope | WiseModel | |-------------|------------|-----------| |[sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[sft-bf16](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|[sft-bf16](https://wisemodel.cn/models/OpenBMB/miniCPM-bf16) |[sft-fp32](https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32)|[sft-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-sft-fp32)|[sft-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32) |[dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[dpo-bf16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|[dpo-bf16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16) |[dpo-fp16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16)|[dpo-fp16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16/)|[dpo-fp16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16) |[dpo-fp32](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32) ## 模型使用 Usage * 安装`transformers>=4.36.0`以及`accelerate`后,运行以下代码 * 注意:需要在`from_pretrained`中明确指明模型的数据类型,否则会引起较大计算误差 * Run the following code after install `transformers>=4.36.0` and `accelerate` * Warning: It is necessary to specify the data type of the model clearly in 'from_pretrained', otherwise large calculation errors will be caused ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch torch.manual_seed(0) path = 'openbmb/MiniCPM-2B-sft-bf16' tokenizer = AutoTokenizer.from_pretrained(path) model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True) responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8) print(responds) ``` * 期望输出 Expected Output ```shell 
山东省最高的山是泰山,海拔1545米。 相对于黄山(海拔1864米),泰山海拔较低,相差约319米。 ``` ## 开源协议 LICENSE #### 模型协议 Model LICENSE * 本仓库中代码依照 [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) 协议开源 * MiniCPM 模型权重的使用则需要遵循 [“通用模型许可协议-来源说明-宣传限制-商业授权”](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md)。 * MiniCPM 模型权重对学术研究完全开放。 * 如需将模型用于商业用途,请联系[email protected]来获取书面授权,在登记后亦允许免费商业使用。 * This repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License. * The usage of MiniCPM model weights must strictly follow [the General Model License (GML)](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md). * The models and weights of MiniCPM are completely free for academic research. * If you intend to utilize the model for commercial purposes, please reach out to [email protected] to obtain the certificate of authorization. #### 声明 Statement * 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。 * 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。 * 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 * As a language model, MiniCPM generates content by learning from a vast amount of text. * However, it does not possess the ability to comprehend or express personal opinions or value judgments. * Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. * Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. <p id="8"></p> ## 工作引用 Citation * 如果觉得MiniCPM有助于您的工作,请考虑引用下列[技术报告](https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4) * Please cite our [techinical report](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4) if you find our work valuable. ``` @inproceedings{minicpm2024, title={MiniCPM:Unveiling the Potential of End-side Large Language Models}, booktitle={OpenBMB Blog}, year={2024} } ```
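Since the card recommends benchmarking with vLLM rather than the Hugging Face generation path, a minimal vLLM sketch follows. The sampling values copy the Hugging Face example above; the vLLM invocation itself and the `<用户>…<AI>` prompt format are assumptions, as the card does not show vLLM code.

```python
from vllm import LLM, SamplingParams

# trust_remote_code is required because MiniCPM ships custom modeling code.
llm = LLM(model="openbmb/MiniCPM-2B-sft-bf16", trust_remote_code=True, dtype="bfloat16")

params = SamplingParams(temperature=0.8, top_p=0.8, max_tokens=512)

# Prompt format is an assumption; check the model's chat template before relying on it.
prompt = "<用户>山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?<AI>"

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```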
{"language": ["en", "zh"], "tags": ["MiniCPM", "ModelBest", "THUNLP"]}
Isaak-Carter/MiniCPM-2B-sft-bf16
null
[ "transformers", "minicpm", "text-generation", "MiniCPM", "ModelBest", "THUNLP", "conversational", "custom_code", "en", "zh", "autotrain_compatible", "region:us" ]
null
2024-04-25T17:42:19+00:00
[]
[ "en", "zh" ]
TAGS #transformers #minicpm #text-generation #MiniCPM #ModelBest #THUNLP #conversational #custom_code #en #zh #autotrain_compatible #region-us
MiniCPM ========= [MiniCPM 技术报告](URL target=) [Technical Report](URL target=) | [OmniLMM 多模态模型 Multi-modal Model](URL target=) | [CPM-C 千亿模型试用 ~100B Model Trial](URL target=) MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-2B 仅有 24亿(2.4B)的非词嵌入参数量。 * 经过 SFT 后,MiniCPM 在公开综合性评测集上,MiniCPM 与 Mistral-7B相近(中文、数学、代码能力更优),整体性能超越 Llama2-13B、MPT-30B、Falcon-40B 等模型。 * 经过 DPO 后,MiniCPM 在当前最接近用户体感的评测集 MTBench上,MiniCPM-2B 也超越了 Llama2-70B-Chat、Vicuna-33B、Mistral-7B-Instruct-v0.1、Zephyr-7B-alpha 等众多代表性开源大模型。 * 以 MiniCPM-2B 为基础构建端侧多模态大模型 MiniCPM-V,整体性能在同规模模型中实现最佳,超越基于 Phi-2 构建的现有多模态大模型,在部分评测集上达到与 9.6B Qwen-VL-Chat 相当甚至更好的性能。 * 经过 Int4 量化后,MiniCPM 可在手机上进行部署推理,流式输出速度略高于人类说话速度。MiniCPM-V 也首次跑通了多模态大模型在手机上的部署。 * 一张1080/2080可高效参数微调,一张3090/4090可全参数微调,一台机器可持续训练 MiniCPM,二次开发成本较低。 我们将完全开源MiniCPM-2B的模型参数供学术研究和有限商用,以及训练过程中的所有Checkpoint和大部分非专有数据供模型机理研究。 * 基于MiniCPM-2B的指令微调与人类偏好对MiniCPM-2B-SFT/DPO。 * 基于MiniCPM-2B的多模态模型MiniCPM-V,能力超越基于Phi-2的同参数级别多模态模型。 * MiniCPM-2B-SFT/DPO的Int4量化版MiniCPM-2B-SFT/DPO-Int4。 * 基于MLC-LLM、LLMFarm开发的MiniCPM手机端程序,文本及多模态模型均可在手机端进行推理。 MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. * MiniCPM has very close performance compared with Mistral-7B on open-sourced general benchmarks with better ability on Chinese, Mathmetics and Coding after SFT. The overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc. * After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench. * MiniCPM-V, based on MiniCPM-2B, achieves the best overall performance among multimodel models of the same scale, surpassing existing multimodal large models built on Phi-2 and achieving performance comparable to or even better than 9.6B Qwen-VL-Chat on some tasks. * MiniCPM can be deployed and infer on smartphones, and the speed of streaming output is relatively higher than the verbal speed of human. MiniCPM-V is the first multi-modal models that can be deployed on smartphones. * The cost of developing based on MiniCPM is low. Parameter efficient finetuning can be conducted with a single 1080/2080 GPU and full parameter finetuning can be conducted with a 3090/4090 GPU. We release all model parameters for research and limited commercial use. We also release all the checkpoint during training and most public training data for research on model mechanism. * SFT and DPO version based on MiniCPM-2B and human preference: MiniCPM-2B-SFT/DPO * The multi-modal model MiniCPM-V based on MiniCPM-2B, which outperforms models with similar size, i.e., Phi-2 * The INT4 quantized version MiniCPM-2B-SFT/DPO-Int4 based on MiniCPM-2B-SFT/DPO * Mobile phone application based on MLC-LLM and LLMFarm. Both language model and multimodel model can conduct inference on smartphones. ### 评测结果 Evaluation Results 详细的评测结果位于github仓库 Detailed evaluation results are in github repo 注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。 Notice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended. We are investigating the cause now. ### 局限性 Limitations * 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进; * 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息; * 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果; * 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。 * Due to limitations in model size, the model may experience hallucinatory issues. 
As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model. * To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models. * Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts. * Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability. 模型下载 Download ------------- HuggingFace: sft-bf16, ModelScope: sft-bf16, WiseModel: sft-bf16 HuggingFace: sft-fp32, ModelScope: sft-fp32, WiseModel: sft-fp32 HuggingFace: dpo-bf16, ModelScope: dpo-bf16, WiseModel: dpo-bf16 HuggingFace: dpo-fp16, ModelScope: dpo-fp16, WiseModel: dpo-fp16 HuggingFace: dpo-fp32, ModelScope: dpo-fp32, WiseModel: dpo-fp32 模型使用 Usage ---------- * 安装'transformers>=4.36.0'以及'accelerate'后,运行以下代码 * 注意:需要在'from\_pretrained'中明确指明模型的数据类型,否则会引起较大计算误差 * Run the following code after install 'transformers>=4.36.0' and 'accelerate' * Warning: It is necessary to specify the data type of the model clearly in 'from\_pretrained', otherwise large calculation errors will be caused * 期望输出 Expected Output 开源协议 LICENSE ------------ #### 模型协议 Model LICENSE * 本仓库中代码依照 Apache-2.0 协议开源 * MiniCPM 模型权重的使用则需要遵循 “通用模型许可协议-来源说明-宣传限制-商业授权”。 * MiniCPM 模型权重对学术研究完全开放。 * 如需将模型用于商业用途,请联系[email protected]来获取书面授权,在登记后亦允许免费商业使用。 * This repository is released under the Apache-2.0 License. * The usage of MiniCPM model weights must strictly follow the General Model License (GML). * The models and weights of MiniCPM are completely free for academic research. * If you intend to utilize the model for commercial purposes, please reach out to cpm@URL to obtain the certificate of authorization. #### 声明 Statement * 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。 * 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。 * 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 * As a language model, MiniCPM generates content by learning from a vast amount of text. * However, it does not possess the ability to comprehend or express personal opinions or value judgments. * Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. * Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. 工作引用 Citation ------------- * 如果觉得MiniCPM有助于您的工作,请考虑引用下列技术报告 * Please cite our techinical report if you find our work valuable.
[ "### 评测结果 Evaluation Results\n\n\n详细的评测结果位于github仓库\n\n\nDetailed evaluation results are in github repo\n\n\n注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。\n\n\nNotice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended.\nWe are investigating the cause now.", "### 局限性 Limitations\n\n\n* 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进;\n* 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息;\n* 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果;\n* 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。\n* Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model.\n* To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models.\n* Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts.\n* Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability.\n\n\n模型下载 Download\n-------------\n\n\nHuggingFace: sft-bf16, ModelScope: sft-bf16, WiseModel: sft-bf16\nHuggingFace: sft-fp32, ModelScope: sft-fp32, WiseModel: sft-fp32\nHuggingFace: dpo-bf16, ModelScope: dpo-bf16, WiseModel: dpo-bf16\nHuggingFace: dpo-fp16, ModelScope: dpo-fp16, WiseModel: dpo-fp16\nHuggingFace: dpo-fp32, ModelScope: dpo-fp32, WiseModel: dpo-fp32\n\n\n模型使用 Usage\n----------\n\n\n* 安装'transformers>=4.36.0'以及'accelerate'后,运行以下代码\n* 注意:需要在'from\\_pretrained'中明确指明模型的数据类型,否则会引起较大计算误差\n* Run the following code after install 'transformers>=4.36.0' and 'accelerate'\n* Warning: It is necessary to specify the data type of the model clearly in 'from\\_pretrained', otherwise large calculation errors will be caused\n* 期望输出 Expected Output\n\n\n开源协议 LICENSE\n------------", "#### 模型协议 Model LICENSE\n\n\n* 本仓库中代码依照 Apache-2.0 协议开源\n* MiniCPM 模型权重的使用则需要遵循 “通用模型许可协议-来源说明-宣传限制-商业授权”。\n* MiniCPM 模型权重对学术研究完全开放。\n* 如需将模型用于商业用途,请联系[email protected]来获取书面授权,在登记后亦允许免费商业使用。\n* This repository is released under the Apache-2.0 License.\n* The usage of MiniCPM model weights must strictly follow the General Model License (GML).\n* The models and weights of MiniCPM are completely free for academic research.\n* If you intend to utilize the model for commercial purposes, please reach out to cpm@URL to obtain the certificate of authorization.", "#### 声明 Statement\n\n\n* 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。\n* 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。\n* 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。\n* As a language model, MiniCPM generates content by learning from a vast amount of text.\n* However, it does not possess the ability to comprehend or express personal opinions or value judgments.\n* Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.\n* Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.\n\n\n\n工作引用 
Citation\n-------------\n\n\n* 如果觉得MiniCPM有助于您的工作,请考虑引用下列技术报告\n* Please cite our technical report if you find our work valuable." ]
[ "TAGS\n#transformers #minicpm #text-generation #MiniCPM #ModelBest #THUNLP #conversational #custom_code #en #zh #autotrain_compatible #region-us \n", "### 评测结果 Evaluation Results\n\n\n详细的评测结果位于github仓库\n\n\nDetailed evaluation results are in github repo\n\n\n注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。\n\n\nNotice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended.\nWe are investigating the cause now.", "### 局限性 Limitations\n\n\n* 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进;\n* 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息;\n* 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果;\n* 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。\n* Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model.\n* To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models.\n* Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts.\n* Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability.\n\n\n模型下载 Download\n-------------\n\n\nHuggingFace: sft-bf16, ModelScope: sft-bf16, WiseModel: sft-bf16\nHuggingFace: sft-fp32, ModelScope: sft-fp32, WiseModel: sft-fp32\nHuggingFace: dpo-bf16, ModelScope: dpo-bf16, WiseModel: dpo-bf16\nHuggingFace: dpo-fp16, ModelScope: dpo-fp16, WiseModel: dpo-fp16\nHuggingFace: dpo-fp32, ModelScope: dpo-fp32, WiseModel: dpo-fp32\n\n\n模型使用 Usage\n----------\n\n\n* 安装'transformers>=4.36.0'以及'accelerate'后,运行以下代码\n* 注意:需要在'from\\_pretrained'中明确指明模型的数据类型,否则会引起较大计算误差\n* Run the following code after install 'transformers>=4.36.0' and 'accelerate'\n* Warning: It is necessary to specify the data type of the model clearly in 'from\\_pretrained', otherwise large calculation errors will be caused\n* 期望输出 Expected Output\n\n\n开源协议 LICENSE\n------------", "#### 模型协议 Model LICENSE\n\n\n* 本仓库中代码依照 Apache-2.0 协议开源\n* MiniCPM 模型权重的使用则需要遵循 “通用模型许可协议-来源说明-宣传限制-商业授权”。\n* MiniCPM 模型权重对学术研究完全开放。\n* 如需将模型用于商业用途,请联系[email protected]来获取书面授权,在登记后亦允许免费商业使用。\n* This repository is released under the Apache-2.0 License.\n* The usage of MiniCPM model weights must strictly follow the General Model License (GML).\n* The models and weights of MiniCPM are completely free for academic research.\n* If you intend to utilize the model for commercial purposes, please reach out to cpm@URL to obtain the certificate of authorization.", "#### 声明 Statement\n\n\n* 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。\n* 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。\n* 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。\n* As a language model, MiniCPM generates content by learning from a vast amount of text.\n* However, it does not possess the ability to comprehend or express personal opinions or value judgments.\n* Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.\n* 
Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.\n\n\n\n工作引用 Citation\n-------------\n\n\n* 如果觉得MiniCPM有助于您的工作,请考虑引用下列技术报告\n* Please cite our technical report if you find our work valuable." ]
null
fastai
# Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
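Since the template above was left unfilled, here is a hedged sketch of pulling the learner from the Hub with `huggingface_hub`. The repo id comes from this record; the guess that it is a resnet34 chest X-ray classifier rests only on the repository name, and the class labels are undocumented.

```python
from huggingface_hub import from_pretrained_fastai

# Downloads and rebuilds the exported fastai Learner from the Hub.
learn = from_pretrained_fastai("Hitomiblood/CnnLearner_resnet34_chestXrayCPU")

# Hypothetical inference on a local image file; the expected classes are not documented.
pred_class, pred_idx, probs = learn.predict("chest_xray.png")
print(pred_class, float(probs[pred_idx]))
```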
{"tags": ["fastai"]}
Hitomiblood/CnnLearner_resnet34_chestXrayCPU
null
[ "fastai", "region:us" ]
null
2024-04-25T17:42:28+00:00
[]
[]
TAGS #fastai #region-us
# Amazing! Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the documentation here)! 2. Create a demo in Gradio or Streamlit using Spaces (documentation here). 3. Join the fastai community on the Fastai Discord! Greetings fellow fastlearner ! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
[ "# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!", "# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---", "# Model card", "## Model description\nMore information needed", "## Intended uses & limitations\nMore information needed", "## Training and evaluation data\nMore information needed" ]
[ "TAGS\n#fastai #region-us \n", "# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!", "# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---", "# Model card", "## Model description\nMore information needed", "## Intended uses & limitations\nMore information needed", "## Training and evaluation data\nMore information needed" ]
text-generation
transformers
# Neura Chat llama3 8B <p align="center"> <img src="neura_llama3.png" width=512 height=256 /> </p> <!-- Provide a quick summary of what the model is/does. --> ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Neura company - **Funded by:** Neura - **Model type:** llama3 - **Language(s) (NLP):** Persian - **Finetuned from model:** meta-llama/Meta-Llama-3-8B-Instruct ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct ## Uses Check out the Google Colab demo to run NeuraChatLlama3_8b on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SSwIa8DmI-yVsqQa_iq1nZmp5X2q0yIu?usp=sharing) make sure these packages are installed: ``` !pip install --no-deps xformers accelerate bitsandbytes !pip install -q -U transformers ``` ```python import transformers from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "Neurai/NeuraChatLlama3_8b" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", ) messages = [ {"role": "system", "content": "تو یک دستیار هوشمند هستی که به هر سوالی پاسخ مناسب و غیر تکراری و بدون تکرار و زیاده گویی می دهی "}, {"role": "user", "content": " تو چه بازارهایی در ایران سرمایه گذاری کنم که سود بیشتری ببرم؟ "}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=2048, eos_token_id=terminators, do_sample=False, temperature=0.6, top_p=1, repetition_penalty=1.05 ) print(outputs[0]["generated_text"][len(prompt):]) ``` Generated text : ``` به عنوان یک دستیار هوشمند، من نمی‌توانم توصیه‌های سرمایه‌گذاری شخصی به شما بدهم، زیرا هر سرمایه‌گذار باید بر اساس نیازهای مالی و ریسک‌پذیری خود تصمیم بگیرد. با این حال، می‌توانم برخی از بازارها و فرصت‌های سرمایه‌گذاری در ایران را که احتمالاً سود بیشتری ببرند، به شما معرفی کنم: 1. بازار طلا و ارز: بازار طلا و ارز در ایران به عنوان یکی از بازارها برای سرمایه‌گذاری در ارز و طلا شناخته شده است. با توجه به تغییرات در نرخ ارز و قیمت طلا، سرمایه‌گذاران می‌توانند از فرصت‌های سرمایه‌گذاری در این بازار بهره ببرند. 2. بازار مستغلات: بازار مستغلات در ایران به عنوان یکی از بازارهای پررونق در کشور شناخته شده است. سرمایه‌گذاران می‌توانند از فرصت‌های سرمایه‌گذاری در املاک و مستغلات بهره ببرند. 3. بازار سرمایه‌گذاری در شرکت‌های کوچک و متوسط: سرمایه‌گذاری در شرکت‌های کوچک و متوسط در ایران به عنوان یکی از بازارهای پررونق در کشور شناخته شده است. سرمایه‌گذاران می‌توانند از فرصت‌های سرمایه‌گذاری در شرکت‌های با رشد پایدار و با ریسک کمتر بهره ببرند. 4. بازار سرمایه‌گذاری در فناوریها: بازار سرمایه‌گذاری در فناوریها در ایران به عنوان یکی از بازارهای پررونق در کشور شناخته شده است. سرمایه‌گذاران می‌توانند از فرصت‌های سرمایه‌گذاری در شرکت‌های فناوری و با رشد پایدار بهره ببرند. 5. بازار سرمایه‌گذاری در طلا و سکه: بازار سرمایه‌گذاری در طلا و سکه در ایران به عنوان یکی از بازارهای پررونق در کشور شناخته شده است. سرمایه‌گذاران می‌توانند 6. بازار بورس: بازار بورس ایران به عنوان یکی از بهترین بازارها برای سرمایه‌گذاری در کشور شناخته شده است. با توجه به رشد اقتصادی و افزایش سرمایه‌گذاری در این بازار، سرمایه‌گذاران می‌توانند از فرصت‌های سرمایه‌گذاری در شرکت‌های بزرگ و با ریسک کمتر بهره ببرند. 7. 
بازار سهام: بازار سهام ایران یکی از بازارهای پررونق در کشور است. سرمایه‌گذاران می‌توانند از فرصت‌های سرمایه‌گذاری در شرکت‌های با رشد پایدار و با ریسک کمتر بهره ببرند. ``` ## More Information https://neura.info ## Model Card Authors Esmaeil Zahedi, Mohsen Yazdinejad ## Model Card Contact [email protected]
{"language": ["fa"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama3", "persian_llama", "neura"], "pipeline_tag": "text-generation"}
Neurai/NeuraChatLlama3_8b
null
[ "transformers", "safetensors", "llama", "text-generation", "llama3", "persian_llama", "neura", "conversational", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-25T17:44:34+00:00
[]
[ "fa" ]
TAGS #transformers #safetensors #llama #text-generation #llama3 #persian_llama #neura #conversational #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Neura Chat llama3 8B <p align="center"> <img src="neura_llama3.png" width=512 height=256 /> </p> ## Model Description - Developed by: Neura company - Funded by: Neura - Model type: llama3 - Language(s) (NLP): Persian - Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct ### Model Sources - Repository: URL ## Uses Check out the Google Colab demo to run NeuraChatLlama3_8b on a free-tier Google Colab instance: ![Open In Colab](URL make sure these packages are installed: Generated text : ## More Information URL ## Model Card Authors Esmaeil Zahedi, Mohsen Yazdinejad ## Model Card Contact info@URL
[ "# Neura Chat llama3 8B\n\n\n<p align=\"center\">\n <img src=\"neura_llama3.png\" width=512 height=256 />\n</p>", "## Model Description\n\n\n\n- Developed by: Neura company\n- Funded by: Neura\n- Model type: llama3\n- Language(s) (NLP): Persian\n- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct", "### Model Sources\n\n\n\n- Repository: URL", "## Uses\n\nCheck out the Google Colab demo to run NeuraChatLlama3_8b on a free-tier Google Colab instance: ![Open In Colab](URL\n\nmake sure these packages are installed:\n\n\nGenerated text :", "## More Information\nURL", "## Model Card Authors\nEsmaeil Zahedi, Mohsen Yazdinejad", "## Model Card Contact\ninfo@URL" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #llama3 #persian_llama #neura #conversational #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Neura Chat llama3 8B\n\n\n<p align=\"center\">\n <img src=\"neura_llama3.png\" width=512 height=256 />\n</p>", "## Model Description\n\n\n\n- Developed by: Neura company\n- Funded by: Neura\n- Model type: llama3\n- Language(s) (NLP): Persian\n- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct", "### Model Sources\n\n\n\n- Repository: URL", "## Uses\n\nCheck out the Google Colab demo to run NeuraChatLlama3_8b on a free-tier Google Colab instance: ![Open In Colab](URL\n\nmake sure these packages are installed:\n\n\nGenerated text :", "## More Information\nURL", "## Model Card Authors\nEsmaeil Zahedi, Mohsen Yazdinejad", "## Model Card Contact\ninfo@URL" ]
text-generation
transformers
# Uploaded model - **Developed by:** sebdg - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
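As a complement to the note above, a minimal inference sketch with Unsloth is shown below; the 4-bit loading, sequence length and generation settings are common Unsloth defaults used here as assumptions, not values taken from the card.

```python
from unsloth import FastLanguageModel

# Assumed loading settings; the card does not specify them.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sebdg/llama3-8b-merged",
    max_seq_length=2048,
    dtype=None,          # auto-detect bf16/fp16
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```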
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
sebdg/llama3-8b-merged
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:46:38+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: sebdg - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: sebdg\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: sebdg\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Meta-Llama-3-8B-Instruct-miracl-mix-raft-sft-25th-apr-v1.0 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the nthakur/miracl-raft-sft-instruct-v0.1, the nthakur/nomiracl-raft-sft-instruct-v0.1, the nthakur/miracl-en-x-raft-sft-instruct-v0.1 and the nthakur/miracl-x-en-raft-sft-instruct-v0.1 datasets. It achieves the following results on the evaluation set: - Loss: 1.3064 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4903 | 0.09 | 200 | 1.3961 | | 1.465 | 0.18 | 400 | 1.3499 | | 1.4193 | 0.28 | 600 | 1.3330 | | 1.3593 | 0.37 | 800 | 1.3232 | | 1.3552 | 0.46 | 1000 | 1.3166 | | 1.3685 | 0.55 | 1200 | 1.3123 | | 1.3487 | 0.64 | 1400 | 1.3094 | | 1.3891 | 0.74 | 1600 | 1.3076 | | 1.3858 | 0.83 | 1800 | 1.3067 | | 1.3635 | 0.92 | 2000 | 1.3064 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
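To make the hyperparameter table above easier to reproduce, the sketch below mirrors it as `transformers.TrainingArguments`. Only values shown in the card are set; the output directory is an assumed name, the Adam betas/epsilon match the defaults the card lists, and the 4-GPU layout comes from the launcher rather than from these arguments.

```python
from transformers import TrainingArguments

# Effective train batch size 32 = 2 per device x 4 GPUs x 4 accumulation steps.
training_args = TrainingArguments(
    output_dir="Meta-Llama-3-8B-Instruct-miracl-mix-raft-sft-25th-apr-v1.0",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```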
{"license": "other", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["nthakur/miracl-raft-sft-instruct-v0.1", "nthakur/nomiracl-raft-sft-instruct-v0.1", "nthakur/miracl-en-x-raft-sft-instruct-v0.1", "nthakur/miracl-x-en-raft-sft-instruct-v0.1"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct-miracl-mix-raft-sft-25th-apr-v1.0", "results": []}]}
nthakur/Meta-Llama-3-8B-Instruct-miracl-mix-raft-sft-25th-apr-v1.0
null
[ "peft", "safetensors", "llama", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:nthakur/miracl-raft-sft-instruct-v0.1", "dataset:nthakur/nomiracl-raft-sft-instruct-v0.1", "dataset:nthakur/miracl-en-x-raft-sft-instruct-v0.1", "dataset:nthakur/miracl-x-en-raft-sft-instruct-v0.1", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-04-25T17:47:41+00:00
[]
[]
TAGS #peft #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-nthakur/miracl-raft-sft-instruct-v0.1 #dataset-nthakur/nomiracl-raft-sft-instruct-v0.1 #dataset-nthakur/miracl-en-x-raft-sft-instruct-v0.1 #dataset-nthakur/miracl-x-en-raft-sft-instruct-v0.1 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
Meta-Llama-3-8B-Instruct-miracl-mix-raft-sft-25th-apr-v1.0 ========================================================== This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the nthakur/miracl-raft-sft-instruct-v0.1, the nthakur/nomiracl-raft-sft-instruct-v0.1, the nthakur/miracl-en-x-raft-sft-instruct-v0.1 and the nthakur/miracl-x-en-raft-sft-instruct-v0.1 datasets. It achieves the following results on the evaluation set: * Loss: 1.3064 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-nthakur/miracl-raft-sft-instruct-v0.1 #dataset-nthakur/nomiracl-raft-sft-instruct-v0.1 #dataset-nthakur/miracl-en-x-raft-sft-instruct-v0.1 #dataset-nthakur/miracl-x-en-raft-sft-instruct-v0.1 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** sebdg - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
sebdg/llama3-8b
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:50:38+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: sebdg - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: sebdg\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: sebdg\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
Lennard-Heuer/DPO-V1
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-25T17:51:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# hus960/MistralTrix8x9B-Q2_K-GGUF

This model was converted to GGUF format from [`Kquant03/MistralTrix8x9B`](https://huggingface.co/Kquant03/MistralTrix8x9B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kquant03/MistralTrix8x9B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo hus960/MistralTrix8x9B-Q2_K-GGUF --model mistraltrix8x9b.Q2_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo hus960/MistralTrix8x9B-Q2_K-GGUF --model mistraltrix8x9b.Q2_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistraltrix8x9b.Q2_K.gguf -n 128
```
{"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
hus960/MistralTrix8x9B-Q2_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "region:us" ]
null
2024-04-25T17:52:11+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
# hus960/MistralTrix8x9B-Q2_K-GGUF

This model was converted to GGUF format from 'Kquant03/MistralTrix8x9B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.

## Use with URL

Install URL through brew.

Invoke the URL server or the CLI.

CLI:

Server:

Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# hus960/MistralTrix8x9B-Q2_K-GGUF\nThis model was converted to GGUF format from 'Kquant03/MistralTrix8x9B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n", "# hus960/MistralTrix8x9B-Q2_K-GGUF\nThis model was converted to GGUF format from 'Kquant03/MistralTrix8x9B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
Ynir/gemma-Code-Instruct-Finetune-idk_v1_big
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-25T17:54:15+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]