| Column | Dtype | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | always null |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | always null |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
fill-mask
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
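The "How to Get Started" section above is still a placeholder. A minimal sketch of what it would typically contain, assuming this checkpoint works with the standard 🤗 Transformers fill-mask pipeline (the repo id `emma7897/distilbert_two` comes from this row's metadata, and `[MASK]` is DistilBERT's default mask token):

```python
# Hedged sketch, not from the original card: load the checkpoint with the
# standard fill-mask pipeline and print the top predictions for the masked slot.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="emma7897/distilbert_two")
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```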
{"library_name": "transformers", "tags": []}
emma7897/distilbert_two
null
[ "transformers", "safetensors", "distilbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-20T05:25:47+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #distilbert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #distilbert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
sentence-transformers
[Sugar Defender](https://icsfs.microsoftcrmportals.com/forums/general-discussion/752befe8-1dfe-ee11-a81c-000d3a289315)

Glucose is a vital source of energy for your cells and organs, especially the brain. Blood glucose levels must be carefully regulated, since levels that are too high or too low can cause health problems. High blood sugar, known as hyperglycemia, can be linked to conditions such as diabetes; symptoms may include excessive thirst, frequent urination, fatigue, and blurred vision. Conversely, low blood sugar, or hypoglycemia, can cause symptoms such as shakiness, sweating, confusion, and, in severe cases, loss of consciousness.

VISIT HERE FOR OFFICIAL WEBSITE:- https://icsfs.microsoftcrmportals.com/forums/general-discussion/752befe8-1dfe-ee11-a81c-000d3a289315
{"language": ["en"], "license": "bigscience-openrail-m", "library_name": "sentence-transformers", "tags": ["Sugar Defender"]}
sephichapdson/SugarDefender
null
[ "sentence-transformers", "Sugar Defender", "en", "license:bigscience-openrail-m", "region:us" ]
null
2024-04-20T05:26:14+00:00
[]
[ "en" ]
TAGS #sentence-transformers #Sugar Defender #en #license-bigscience-openrail-m #region-us
Sugar Defender It's a vital wellspring of energy for your phones and organs, especially for the mind. Glucose levels in the blood should be painstakingly managed as too high or too low levels can cause medical problems.High blood sugar, known as hyperglycemia, can be connected to conditions like diabetes. Side effects might incorporate unreasonable thirst, successive pee, exhaustion, and obscured vision. Then again, low blood sugar, or hypoglycemia, can cause side effects like flimsiness, perspiring, disarray, and, in serious cases, loss of cognizance. VISIT HERE FOR OFFICIAL WEBSITE:-URL
[]
[ "TAGS\n#sentence-transformers #Sugar Defender #en #license-bigscience-openrail-m #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`

```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  - path: CognitiveLab/Samvaad_Hindi_Hinglish_Llama3_Prompt_formate
    type: completion
    field: text
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./llama3-gaja-v0.1

sequence_len: 8000
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: llama3-gaja-v0.1
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 6
micro_batch_size: 4
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|end_of_text|>
```

</details><br>

# llama3-gaja-v0.1

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0365

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8709        | 0.0   | 1    | 1.8383          |
| 1.1106        | 0.25  | 128  | 1.0989          |
| 1.0379        | 0.5   | 256  | 1.0510          |
| 1.0402        | 0.75  | 384  | 1.0386          |
| 1.0703        | 1.0   | 512  | 1.0365          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
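The card never shows how to load the adapter. A minimal sketch, assuming the repo holds a PEFT LoRA adapter on top of the gated `meta-llama/Meta-Llama-3-8B-Instruct` base (the repo id comes from this row's metadata, and access to the base model must already be granted):

```python
# Hedged sketch, not part of the original card: load the LoRA adapter with PEFT
# and run a short generation. Assumes the base model is accessible.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "AdithyaSK/LLama3-Gaja-Hindi-8B-Instruct-alpha"  # repo id from this row
model = AutoPeftModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```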
{"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "llama3-gaja-v0.1", "results": []}]}
AdithyaSK/LLama3-Gaja-Hindi-8B-Instruct-alpha
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "8-bit", "region:us" ]
null
2024-04-20T05:31:48+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #8-bit #region-us
<img src="URL alt="Built with Axolotl" width="200" height="32"/> See axolotl config axolotl version: '0.4.0' llama3-gaja-v0.1 ================ This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.0365 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 6 * total\_train\_batch\_size: 24 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 10 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0.dev0 * Pytorch 2.0.1+cu118 * Datasets 2.15.0 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 6\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.0.1+cu118\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #8-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 6\n* total\\_train\\_batch\\_size: 24\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.0.1+cu118\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
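The getting-started section above is empty and this row's tags do not name a task head, so only the generic loading pattern can be sketched (the repo id comes from this row; whether plain `AutoModel` is the right class here is an assumption):

```python
# Hedged sketch, not from the card: the task head is undocumented, so this only
# demonstrates loading the weights and inspecting the architecture.
from transformers import AutoConfig, AutoModel, AutoTokenizer

repo = "HenryCai1129/LlamaAdapter-llama2-happy-1000-temp-new3e-05"  # from this row
config = AutoConfig.from_pretrained(repo)
print(config.architectures)  # reveals which head the checkpoint actually carries

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
```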
{"library_name": "transformers", "tags": []}
HenryCai1129/LlamaAdapter-llama2-happy-1000-temp-new3e-05
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T05:31:50+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# twitter-roberta-base-sentiment-latest-trump-stance

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3134
- Accuracy: {'accuracy': 0.87875}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------:|
| 0.5486        | 1.0   | 1800 | 0.5270          | {'accuracy': 0.79375} |
| 0.4626        | 2.0   | 3600 | 0.4231          | {'accuracy': 0.85375} |
| 0.4216        | 3.0   | 5400 | 0.3134          | {'accuracy': 0.87875} |

### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
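The card omits inference code. A minimal sketch, assuming the adapter wraps the base model's sequence-classification head (the repo and base ids come from this row; the stance label names are not documented, so only raw probabilities are printed):

```python
# Hedged sketch, not part of the original card: load the PEFT adapter for
# sequence classification and score one example tweet.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

repo = "saideep-arikontham/twitter-roberta-base-sentiment-latest-trump-stance"
model = AutoPeftModelForSequenceClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment-latest")

inputs = tokenizer("Example tweet about the candidate.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label names are not given in the card
```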
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "cardiffnlp/twitter-roberta-base-sentiment-latest", "model-index": [{"name": "twitter-roberta-base-sentiment-latest-trump-stance", "results": []}]}
saideep-arikontham/twitter-roberta-base-sentiment-latest-trump-stance
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-sentiment-latest", "region:us" ]
null
2024-04-20T05:34:05+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-cardiffnlp/twitter-roberta-base-sentiment-latest #region-us
twitter-roberta-base-sentiment-latest-trump-stance ================================================== This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment-latest on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3134 * Accuracy: {'accuracy': 0.87875} Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.001 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.2.1 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-cardiffnlp/twitter-roberta-base-sentiment-latest #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
<img src="https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png">

MoE'd up:
- [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b)
- [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)

These were the two most interesting Llama 3 finetunes yet.

Resulting model seems OK. It's not on Miqu's level, anyway.

Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw3
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "3-bit", "region:us" ]
null
2024-04-20T05:35:54+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
<img src=URL MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct_ Which were the two most interesting llama3 finetunes as of yet. Resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n" ]
text-generation
transformers
# Tinyllama-moe3

Dopey-karasu-MoE3 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1)
* [Tensoic/TinyLlama-1.1B-3T-openhermes](https://huggingface.co/Tensoic/TinyLlama-1.1B-3T-openhermes)

## 🧩 Configuration

```yaml
base_model: vihangd/DopeyTinyLlama-1.1B-v1
experts:
  - source_model: vihangd/DopeyTinyLlama-1.1B-v1
    positive_prompts:
    - "chat"
    - "assistant"
    - "tell me"
    - "explain"
  - source_model: Tensoic/TinyLlama-1.1B-3T-openhermes
    positive_prompts:
    - "reason"
    - "provide"
    - "instruct"
    - "summarize"
    - "count"
```

## 💻 Usage

```bash
pip install -qU transformers bitsandbytes accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "aipib/Dopey-karasu-MoE3"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "vihangd/DopeyTinyLlama-1.1B-v1", "Tensoic/TinyLlama-1.1B-3T-openhermes"], "base_model": ["vihangd/DopeyTinyLlama-1.1B-v1", "Tensoic/TinyLlama-1.1B-3T-openhermes"]}
aipib/Tinyllama-moe3
null
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "vihangd/DopeyTinyLlama-1.1B-v1", "Tensoic/TinyLlama-1.1B-3T-openhermes", "conversational", "base_model:vihangd/DopeyTinyLlama-1.1B-v1", "base_model:Tensoic/TinyLlama-1.1B-3T-openhermes", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T05:36:15+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #vihangd/DopeyTinyLlama-1.1B-v1 #Tensoic/TinyLlama-1.1B-3T-openhermes #conversational #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-Tensoic/TinyLlama-1.1B-3T-openhermes #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Tinyllama-moe3 Dopey-karasu-MoE3 is a Mixture of Experts (MoE) made with the following models using LazyMergekit: * vihangd/DopeyTinyLlama-1.1B-v1 * Tensoic/TinyLlama-1.1B-3T-openhermes ## Configuration ## Usage
[ "# Tinyllama-moe3\n\nDopey-karasu-MoE3 is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* vihangd/DopeyTinyLlama-1.1B-v1\n* Tensoic/TinyLlama-1.1B-3T-openhermes", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #vihangd/DopeyTinyLlama-1.1B-v1 #Tensoic/TinyLlama-1.1B-3T-openhermes #conversational #base_model-vihangd/DopeyTinyLlama-1.1B-v1 #base_model-Tensoic/TinyLlama-1.1B-3T-openhermes #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Tinyllama-moe3\n\nDopey-karasu-MoE3 is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* vihangd/DopeyTinyLlama-1.1B-v1\n* Tensoic/TinyLlama-1.1B-3T-openhermes", "## Configuration", "## Usage" ]
null
mlx
# lucataco/Mixtral-8x7B-Instruct-v0.1-4bit

This model was converted to MLX format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("lucataco/Mixtral-8x7B-Instruct-v0.1-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"language": ["fr", "it", "de", "es", "en"], "license": "apache-2.0", "tags": ["mlx"], "inference": {"parameters": {"temperature": 0.5}}, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
lucataco/Mixtral-8x7B-Instruct-v0.1-4bit
null
[ "mlx", "safetensors", "mixtral", "fr", "it", "de", "es", "en", "license:apache-2.0", "region:us" ]
null
2024-04-20T05:37:06+00:00
[]
[ "fr", "it", "de", "es", "en" ]
TAGS #mlx #safetensors #mixtral #fr #it #de #es #en #license-apache-2.0 #region-us
# lucataco/Mixtral-8x7B-Instruct-v0.1-4bit This model was converted to MLX format from ['mistralai/Mixtral-8x7B-Instruct-v0.1']() using mlx-lm version 0.10.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# lucataco/Mixtral-8x7B-Instruct-v0.1-4bit\nThis model was converted to MLX format from ['mistralai/Mixtral-8x7B-Instruct-v0.1']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #mixtral #fr #it #de #es #en #license-apache-2.0 #region-us \n", "# lucataco/Mixtral-8x7B-Instruct-v0.1-4bit\nThis model was converted to MLX format from ['mistralai/Mixtral-8x7B-Instruct-v0.1']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-v3-large-otat-recommened-hp

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the DandinPower/review_onlytitleandtext dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8169
- Accuracy: 0.6686
- Macro F1: 0.6662

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.7726        | 1.14  | 500  | 0.8107          | 0.6613   | 0.6602   |
| 0.6983        | 2.29  | 1000 | 0.7739          | 0.669    | 0.6662   |
| 0.6504        | 3.43  | 1500 | 0.7891          | 0.6726   | 0.6725   |
| 0.6067        | 4.57  | 2000 | 0.8169          | 0.6686   | 0.6662   |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
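Inference code is missing from the card. A minimal sketch, assuming the checkpoint works with the standard text-classification pipeline (the repo id comes from this row's metadata; the example review text is made up):

```python
# Hedged sketch, not from the original card: score one review with the
# standard text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DandinPower/deberta-v3-large-otat-recommened-hp",
)
print(classifier("Great product, works exactly as described."))
```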
{"language": ["en"], "license": "mit", "tags": ["nycu-112-2-datamining-hw2", "generated_from_trainer"], "datasets": ["DandinPower/review_onlytitleandtext"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large-otat-recommened-hp", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "DandinPower/review_onlytitleandtext", "type": "DandinPower/review_onlytitleandtext"}, "metrics": [{"type": "accuracy", "value": 0.6685714285714286, "name": "Accuracy"}]}]}]}
DandinPower/deberta-v3-large-otat-recommened-hp
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "nycu-112-2-datamining-hw2", "generated_from_trainer", "en", "dataset:DandinPower/review_onlytitleandtext", "base_model:microsoft/deberta-v3-large", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T05:37:34+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #deberta-v2 #text-classification #nycu-112-2-datamining-hw2 #generated_from_trainer #en #dataset-DandinPower/review_onlytitleandtext #base_model-microsoft/deberta-v3-large #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
deberta-v3-large-otat-recommened-hp =================================== This model is a fine-tuned version of microsoft/deberta-v3-large on the DandinPower/review\_onlytitleandtext dataset. It achieves the following results on the evaluation set: * Loss: 0.8169 * Accuracy: 0.6686 * Macro F1: 0.6662 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 6e-06 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 50 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #deberta-v2 #text-classification #nycu-112-2-datamining-hw2 #generated_from_trainer #en #dataset-DandinPower/review_onlytitleandtext #base_model-microsoft/deberta-v3-large #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod16

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3355
- Wer: 0.3031

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9606        | 1.0   | 278  | 2.9210          | 1.0    |
| 2.8429        | 2.0   | 556  | 2.1290          | 1.0    |
| 0.9644        | 3.0   | 834  | 0.5957          | 0.5614 |
| 0.6414        | 4.0   | 1112 | 0.4595          | 0.4643 |
| 0.5396        | 5.0   | 1390 | 0.4189          | 0.4090 |
| 0.4334        | 6.0   | 1668 | 0.3778          | 0.3670 |
| 0.3939        | 7.0   | 1946 | 0.3777          | 0.3544 |
| 0.3738        | 8.0   | 2224 | 0.3511          | 0.3355 |
| 0.3387        | 9.0   | 2502 | 0.3569          | 0.3240 |
| 0.3071        | 10.0  | 2780 | 0.3405          | 0.3165 |
| 0.3129        | 11.0  | 3058 | 0.3313          | 0.3065 |
| 0.2971        | 12.0  | 3336 | 0.3355          | 0.3031 |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
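The card gives no usage snippet. A minimal sketch, assuming the checkpoint works with the standard automatic-speech-recognition pipeline (the repo id comes from this row's metadata; `sample.wav` is a placeholder path):

```python
# Hedged sketch, not part of the original card: transcribe one local audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="EzraWilliam/wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod16",
)
print(asr("sample.wav")["text"])  # hypothetical 16 kHz audio file
```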
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_13_0"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod16", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_13_0", "type": "common_voice_13_0", "config": "id", "split": "test", "args": "id"}, "metrics": [{"type": "wer", "value": 0.3030973451327434, "name": "Wer"}]}]}]}
EzraWilliam/wav2vec2-xlsr-53-CV-demo-google-colab-Ezra_William_Prod16
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_13_0", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-20T05:41:56+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_13_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-xlsr-53-CV-demo-google-colab-Ezra\_William\_Prod16 =========================================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice\_13\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3355 * Wer: 0.3031 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 12 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_13_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
mlx
# lucataco/Meta-Llama-3-8B-4bit

This model was converted to MLX format from [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("lucataco/Meta-Llama-3-8B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "mlx"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
lucataco/Meta-Llama-3-8B-4bit
null
[ "mlx", "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-20T05:43:31+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #en #license-other #region-us
# lucataco/Meta-Llama-3-8B-4bit This model was converted to MLX format from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using mlx-lm version 0.10.0. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model. ## Use with mlx
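The code for the "Use with mlx" section was stripped from this processed copy; a minimal sketch assuming the standard mlx-lm API, mirroring the pattern shown verbatim in the companion Instruct card later in this dump:

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 4-bit quantized weights and tokenizer from the Hub
model, tokenizer = load("lucataco/Meta-Llama-3-8B-4bit")

# Base (non-instruct) model, so plain text completion
response = generate(model, tokenizer, prompt="hello", verbose=True)
```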
[ "# lucataco/Meta-Llama-3-8B-4bit\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-8B']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #en #license-other #region-us \n", "# lucataco/Meta-Llama-3-8B-4bit\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-8B']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
transformers
## Llamacpp Quantizations of Llama-3-Smaug-8B This model has the <|eot_id|> token set to not-special, which seems to work better with current inference engines. Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> fork from pcuenca <a href="https://github.com/pcuenca/llama.cpp/tree/llama3-conversion">llama3-conversion</a> for quantization. Original model: https://huggingface.co/abacusai/Llama-3-Smaug-8B All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Llama-3-Smaug-8B-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Llama-3-Smaug-8B-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Llama-3-Smaug-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Llama-3-Smaug-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Llama-3-Smaug-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Llama-3-Smaug-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Llama-3-Smaug-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Llama-3-Smaug-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Llama-3-Smaug-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [Llama-3-Smaug-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Llama-3-Smaug-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Llama-3-Smaug-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. 
| | [Llama-3-Smaug-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Llama-3-Smaug-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Llama-3-Smaug-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Llama-3-Smaug-8B-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Llama-3-Smaug-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Llama-3-Smaug-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-Smaug-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-Smaug-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [Llama-3-Smaug-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [Llama-3-Smaug-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-Smaug-8B-GGUF/blob/main/Llama-3-Smaug-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
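To grab a single quant file rather than cloning the whole repository, the `huggingface_hub` CLI works; a sketch (the Q4_K_M file is just an example pick):

```bash
# Install the Hugging Face Hub CLI
pip install -U "huggingface_hub[cli]"

# Download only the chosen quant into the current directory
huggingface-cli download bartowski/Llama-3-Smaug-8B-GGUF \
  --include "Llama-3-Smaug-8B-Q4_K_M.gguf" \
  --local-dir ./
```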
{"license": "llama2", "library_name": "transformers", "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/Llama-3-Smaug-8B-GGUF
null
[ "transformers", "gguf", "text-generation", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-20T05:46:42+00:00
[]
[]
TAGS #transformers #gguf #text-generation #license-llama2 #endpoints_compatible #region-us
Llamacpp Quantizations of Llama-3-Smaug-8B ------------------------------------------ This model has the <|eot\_id|> token set to not-special, which seems to work better with current inference engines. Using the llama.cpp fork from pcuenca (llama3-conversion) for quantization. Original model: URL All quants made using imatrix option with dataset provided by Kalomaze here Prompt format ------------- Download a file (not the whole branch) from below: -------------------------------------------------- Which file should I choose? --------------------------- A great write-up with charts showing various performances is provided by Artefact2 here The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: URL feature matrix But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#transformers #gguf #text-generation #license-llama2 #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
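The "How to Get Started" section of this card is left as [More Information Needed]; a hypothetical minimal example, inferred only from the repo tags (transformers, safetensors, stablelm, text-generation) and not provided by the model author — the intended prompt format is unknown:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical quick-start; repo id taken from the listing below
tokenizer = AutoTokenizer.from_pretrained("OwOOwO/dumbo-stable3")
model = AutoModelForCausalLM.from_pretrained("OwOOwO/dumbo-stable3")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```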
{"library_name": "transformers", "tags": []}
OwOOwO/dumbo-stable3
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T05:47:25+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # byt5_add This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1606 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 800 - eval_batch_size: 800 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 13 | 4.2259 | | No log | 2.0 | 26 | 2.4178 | | No log | 3.0 | 39 | 1.9256 | | No log | 4.0 | 52 | 1.7310 | | No log | 5.0 | 65 | 1.6577 | | No log | 6.0 | 78 | 1.6385 | | No log | 7.0 | 91 | 1.6110 | | No log | 8.0 | 104 | 1.5811 | | No log | 9.0 | 117 | 1.5237 | | No log | 10.0 | 130 | 1.4809 | | No log | 11.0 | 143 | 1.4378 | | No log | 12.0 | 156 | 1.3976 | | No log | 13.0 | 169 | 1.3462 | | No log | 14.0 | 182 | 1.2587 | | No log | 15.0 | 195 | 1.2260 | | No log | 16.0 | 208 | 1.1018 | | No log | 17.0 | 221 | 1.0273 | | No log | 18.0 | 234 | 0.9436 | | No log | 19.0 | 247 | 0.8007 | | No log | 20.0 | 260 | 0.6919 | | No log | 21.0 | 273 | 0.6201 | | No log | 22.0 | 286 | 0.5486 | | No log | 23.0 | 299 | 0.4804 | | No log | 24.0 | 312 | 0.4080 | | No log | 25.0 | 325 | 0.3861 | | No log | 26.0 | 338 | 0.3477 | | No log | 27.0 | 351 | 0.3181 | | No log | 28.0 | 364 | 0.2921 | | No log | 29.0 | 377 | 0.2832 | | No log | 30.0 | 390 | 0.2693 | | No log | 31.0 | 403 | 0.2469 | | No log | 32.0 | 416 | 0.2453 | | No log | 33.0 | 429 | 0.2313 | | No log | 34.0 | 442 | 0.2134 | | No log | 35.0 | 455 | 0.2139 | | No log | 36.0 | 468 | 0.2088 | | No log | 37.0 | 481 | 0.2007 | | No log | 38.0 | 494 | 0.1960 | | 1.3 | 39.0 | 507 | 0.1830 | | 1.3 | 40.0 | 520 | 0.1782 | | 1.3 | 41.0 | 533 | 0.1746 | | 1.3 | 42.0 | 546 | 0.1741 | | 1.3 | 43.0 | 559 | 0.1708 | | 1.3 | 44.0 | 572 | 0.1668 | | 1.3 | 45.0 | 585 | 0.1650 | | 1.3 | 46.0 | 598 | 0.1651 | | 1.3 | 47.0 | 611 | 0.1629 | | 1.3 | 48.0 | 624 | 0.1627 | | 1.3 | 49.0 | 637 | 0.1610 | | 1.3 | 50.0 | 650 | 0.1606 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
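The card does not show inference code; a hedged sketch of loading the checkpoint with the standard ByT5/T5 classes. The input string "123+456" is purely an assumption based on the model name (byt5_add) — the training dataset is undocumented:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# ByT5 checkpoints load with the standard T5 seq2seq classes
tokenizer = AutoTokenizer.from_pretrained("AlexWang99/byt5_add_10k")
model = T5ForConditionalGeneration.from_pretrained("AlexWang99/byt5_add_10k")

# Assumed input format: a raw addition expression (unverified)
inputs = tokenizer("123+456", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```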
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/byt5-small", "model-index": [{"name": "byt5_add", "results": []}]}
AlexWang99/byt5_add_10k
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/byt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T05:47:54+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/byt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
byt5\_add ========= This model is a fine-tuned version of google/byt5-small on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1606 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 800 * eval\_batch\_size: 800 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 50 ### Training results ### Framework versions * Transformers 4.35.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 800\n* eval\\_batch\\_size: 800\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/byt5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 800\n* eval\\_batch\\_size: 800\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
# AutoTrain SDXL LoRA DreamBooth - vietvo/sdxl-lora-viet <Gallery /> ## Model description These are vietvo/sdxl-lora-viet LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use `A photo of viet and thao wearing casual clothes, taking a selfie, and smiling.` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/vietvo/sdxl-lora-viet/tree/main) them in the Files & versions tab.
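The card omits a usage snippet; a minimal diffusers sketch, assuming the usual pattern of loading the SDXL base model and attaching these LoRA weights (not author-provided):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the base model this LoRA was trained against
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights
pipe.load_lora_weights("vietvo/sdxl-lora-viet")

# Use the trigger prompt from the card
prompt = "A photo of viet and thao wearing casual clothes, taking a selfie, and smiling."
image = pipe(prompt).images[0]
image.save("selfie.png")
```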
{"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of viet and thao wearing casual clothes, taking a selfie, and smiling."}
vietvo/sdxl-lora-viet
null
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-20T05:48:45+00:00
[]
[]
TAGS #diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# AutoTrain SDXL LoRA DreamBooth - vietvo/sdxl-lora-viet <Gallery /> ## Model description These are vietvo/sdxl-lora-viet LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use A photo of viet and thao wearing casual clothes, taking a selfie, and smiling. to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# AutoTrain SDXL LoRA DreamBooth - vietvo/sdxl-lora-viet\n\n<Gallery />", "## Model description\n\nThese are vietvo/sdxl-lora-viet LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.", "## Trigger words\n\nYou should use A photo of viet and thao wearing casual clothes, taking a selfie, and smiling. to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# AutoTrain SDXL LoRA DreamBooth - vietvo/sdxl-lora-viet\n\n<Gallery />", "## Model description\n\nThese are vietvo/sdxl-lora-viet LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.", "## Trigger words\n\nYou should use A photo of viet and thao wearing casual clothes, taking a selfie, and smiling. to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
text-generation
mlx
# lucataco/Meta-Llama-3-8B-Instruct-4bit This model was converted to MLX format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("lucataco/Meta-Llama-3-8B-Instruct-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
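Since this is the Instruct variant, prompts generally behave better when wrapped in the Llama 3 chat template; a hedged extension of the snippet above (mlx-lm's tokenizer wrapper is assumed to expose `apply_chat_template`):

```python
from mlx_lm import load, generate

model, tokenizer = load("lucataco/Meta-Llama-3-8B-Instruct-4bit")

# Format the user turn with the model's chat template before generating
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```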
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "mlx"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
lucataco/Meta-Llama-3-8B-Instruct-4bit
null
[ "mlx", "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "license:other", "region:us" ]
null
2024-04-20T05:49:04+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #conversational #en #license-other #region-us
# lucataco/Meta-Llama-3-8B-Instruct-4bit This model was converted to MLX format from ['meta-llama/Meta-Llama-3-8B-Instruct']() using mlx-lm version 0.10.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# lucataco/Meta-Llama-3-8B-Instruct-4bit\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-8B-Instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #conversational #en #license-other #region-us \n", "# lucataco/Meta-Llama-3-8B-Instruct-4bit\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-8B-Instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
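The "How to Get Started" section of this card is likewise empty; a hypothetical example inferred only from the repo name and tags (bert, text-classification) — the label mapping is undocumented, so the output is left as raw logits:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical quick-start; not vetted by the model author
repo = "satishsingh90/FineTuneBert_toxic_comment"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("You are a wonderful person!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpret against the (undocumented) label mapping
```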
{"library_name": "transformers", "tags": []}
satishsingh90/FineTuneBert_toxic_comment
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T05:52:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** Rupesh2 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
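Since the card only names the base checkpoint and training stack, here is a hedged sketch of loading the model with Unsloth's own API; the sequence length and prompt are assumptions, not from the card:

```python
from unsloth import FastLanguageModel

# 4-bit load mirrors the unsloth/llama-3-8b-bnb-4bit base this model was tuned from
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Rupesh2/Llama3Hindi",
    max_seq_length=2048,   # assumption; pick what your task needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```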
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Rupesh2/Llama3Hindi
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T05:52:26+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: Rupesh2 - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Rupesh2\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Rupesh2\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym # `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="yunkimmy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
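Once the environment is created, evaluation is just greedy action selection over the learned Q-table. A sketch, assuming the pickled dict exposes a `"qtable"` array as in the Deep RL course helpers, and a gym release with the 5-tuple `step` API:

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy policy, no exploration
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```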
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
yunkimmy/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-20T05:56:18+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
null
null
# Abnehm Gummibärchen Test Germany Experiences, Review, Price, Where to Buy The idea of the weight-loss gummy bear ("Abnehm-Gummibärchen") is not new. As early as 2011, an author writing under the pseudonym Adrian Janson claimed in his book "Die Gummibärchen-Diät - Abnehmen mit Bärenkraft" that he had lost eight kilos in three months thanks to fruit gummies eaten before every main meal. According to the author, the most important parts of the diet are the right dosage and the times at which the gummies have to be eaten. ## **[Click here to buy now on the official Abnehm website](https://adtocart.xyz/abnehm-de)** ## What effects does weight loss with Abnehm Gummibärchen have? According to Abnehmen Gummibärchen user reports, it promotes a better metabolism throughout the body so that fat burning remains constant. The process of thermogenesis raises internal heat so that extra fat deposits can be melted away easily. In addition, hunger is curbed to reduce the risk of obesity. The improved metabolism reduces cravings for unhealthy snacks. It also regulates the cardiovascular system and maintains a healthy weight naturally. For users who really want to keep losing weight even while doing nothing, this high-end solution is presented as a great option. It is said to strengthen the entire cardiovascular system and trigger ketosis for healthy, fast weight loss. ## Abnehm Gummibärchen ingredients for weight loss The exclusive additives contained in the formula strengthen the whole body and deliver weight-management results. Using Abnehmen Gummibärchen (featured on Höhle der Löwen) is promoted as a natural method for weight reduction and fat-cell burning. This one option maintains the overall health picture. Here is what you get in return: ## Garcinia Cambogia The hydroxycitric acid contained in this substance improves the metabolism and acts efficiently on tissue. It also suppresses the appetite and maximizes fat burning so that you lose weight naturally. ## CLA blend When boosting immunity and triggering natural weight management in the body become essential, all of it is claimed to be possible with Abnehm Gummibärchen alone. For maintaining overall body weight, one would choose this scientific remedy. Moreover, a boost in immunity is not unusual either. With the given supplement there will be very clear, positive changes in the user's body. ## Green tea When weight loss becomes a serious matter, green tea can be brought in to eliminate toxins and excess fat. Green tea is the number-one fortifying ingredient, removing unwanted elements from the body and keeping you healthy for good. ## Dandelion This multivitamin ingredient lifts the mood, reduces oxidative stress, and helps with weight management naturally. The substance deserves credit for its general benefit. ## Advantages of choosing Abnehm Gummibärchen In evaluating the advantages of choosing Abnehm Gummibärchen, we noted the following points. Here is what the remedy offers the user: use the best dietary capsule and effortlessly adopt a healthy lifestyle. ## **[Click here to buy now on the official Abnehm website](https://adtocart.xyz/abnehm-de)**
{}
VKapseln475/Abnehm55
null
[ "region:us" ]
null
2024-04-20T05:57:42+00:00
[]
[]
TAGS #region-us
# Abnehm Gummibärchen Test Deutschland Erfahrungen Bewertung Preis, Kaufen Abnehm Gummibärchen Test Deutschland Neu ist die Idee vom Abnehm-Gummibärchen nicht. Ein Autor mit dem Pseudonym Adrian Janson behauptete bereits 2011 in seinem Buch "Die Gummibärchen-Diät - Abnehmen mit Bärenkraft", er habe dank Fruchtgummis vor jeder Hauptmahlzeit acht Kilo in drei Monaten abgenommen. Das Wichtigste der Diät sei die richtige Dosierung und die Zeit, in denen die Bärchen verdrückt werden müssten, so der Autor. ## Klicken Sie hier, um jetzt auf der offiziellen Website von Abnehm zu kaufen ## Welche Auswirkungen hat die Gewichtsabnahme durch Abnehm Gummibärchen? Abnehmen Gummibärchen Erfahrungen sorgt für einen besseren Stoffwechsel im gesamten Körper, so dass die Fettverbrennung konstant bleibt. Der Prozess der Thermogenese verbessert die Wärme im Inneren, so dass zusätzliche Fettablagerungen leicht geschmolzen werden können. Darüber hinaus gibt es eine Trennung des Hungers, um das Risiko von Fettleibigkeit zu verringern. Der verbesserte Stoffwechsel im Körper verringert das Verlangen nach ungesunden Snacks. Es steuert auch das Herz-Kreislauf-System und sorgt auf natürliche Weise für ein gesundes Gewicht. Wenn Benutzer wirklich weiter abnehmen möchten, auch wenn sie nichts tun, ist die Wahl dieser High-End-Lösung eine großartige Option. Es würde Ihr gesamtes Herz-Kreislauf-System stärken und die Ketose für einen gesunden und schnellen Gewichtsverlust auslösen. ##Abnehm Gummibärchen-Zutaten zur Gewichtsreduktion vorgestellt Die in der Formel enthaltenen exklusiven Zusatzstoffe stärken den gesamten Körper und sorgen für Ergebnisse bei der Gewichtskontrolle. Die Verwendung von Abnehmen Gummibärchen höhle der löwen ist eine natürliche Methode zur Gewichtsreduktion und Fettzellenverbrennung. Diese eine Option behält das allgemeine Gesundheitsszenario bei. Folgendes gibt es als Gegenleistung: ## Garcinia Combogia Die in der Substanz enthaltene Hydroxyzitronensäure verbessert den Stoffwechsel und wirkt effizient auf das Gewebe. Es unterdrückt außerdem den Appetit und maximiert die Fettverbrennung, sodass Sie auf natürliche Weise abnehmen können. ## Cla-Mischung Wenn die Verbesserung der Immunität und die Auslösung eines natürlichen Gewichtsmanagements im Körper unerlässlich werden, kann alles allein mit Abnehm Gummibärchen möglich werden. Für die Aufrechterhaltung des gesamten Körpergewichts würde man das wissenschaftliche Mittel wählen. Darüber hinaus ist eine Verstärkung der Immunität auch nicht sehr ungewöhnlich. Mit der gegebenen Ergänzung wird es im Körper des Benutzers sehr deutliche und positive Veränderungen geben. ## Grüner Tee Wenn Gewichtsverlust zu einer ernsten Angelegenheit wird, ist die Einführung von grünem Tee zur Beseitigung von Giftstoffen und überschüssigem Fett möglich. Grüner Tee ist der stärkende Inhaltsstoff Nummer eins, der unerwünschte Elemente aus dem Körper entfernt und Sie dauerhaft gesund hält. ## Löwenzahn Der Multivitamin-Inhaltsstoff hebt die Stimmung, reduziert oxidativen Stress und hilft auf natürliche Weise bei der Gewichtskontrolle. Es lohnt sich, dieser Substanz einen allgemeinen Nutzen zu verschaffen. ## Vorteile der Wahl von Abnehm Gummibärchen Bei der Bewertung der Vorteile der Wahl von Abnehm Gummibärchen konnten wir die folgenden Punkte feststellen. Folgendes bringt die Abhilfe für den Benutzer: Verwenden Sie die beste Nahrungskapsel und führen Sie mühelos einen gesunden Lebensstil ein ## Klicken Sie hier, um jetzt auf der offiziellen Website von Abnehm zu kaufen
[ "# Abnehm Gummibärchen Test Deutschland Erfahrungen Bewertung Preis, Kaufen\n\nAbnehm Gummibärchen Test Deutschland Neu ist die Idee vom Abnehm-Gummibärchen nicht. Ein Autor mit dem Pseudonym Adrian Janson behauptete bereits 2011 in seinem Buch \"Die Gummibärchen-Diät - Abnehmen mit Bärenkraft\", er habe dank Fruchtgummis vor jeder Hauptmahlzeit acht Kilo in drei Monaten abgenommen. Das Wichtigste der Diät sei die richtige Dosierung und die Zeit, in denen die Bärchen verdrückt werden müssten, so der Autor.", "## Klicken Sie hier, um jetzt auf der offiziellen Website von Abnehm zu kaufen", "## Welche Auswirkungen hat die Gewichtsabnahme durch Abnehm Gummibärchen?\n\nAbnehmen Gummibärchen Erfahrungen sorgt für einen besseren Stoffwechsel im gesamten Körper, so dass die Fettverbrennung konstant bleibt. Der Prozess der Thermogenese verbessert die Wärme im Inneren, so dass zusätzliche Fettablagerungen leicht geschmolzen werden können. Darüber hinaus gibt es eine Trennung des Hungers, um das Risiko von Fettleibigkeit zu verringern. Der verbesserte Stoffwechsel im Körper verringert das Verlangen nach ungesunden Snacks. Es steuert auch das Herz-Kreislauf-System und sorgt auf natürliche Weise für ein gesundes Gewicht.\n\nWenn Benutzer wirklich weiter abnehmen möchten, auch wenn sie nichts tun, ist die Wahl dieser High-End-Lösung eine großartige Option. Es würde Ihr gesamtes Herz-Kreislauf-System stärken und die Ketose für einen gesunden und schnellen Gewichtsverlust auslösen.", "## Garcinia Combogia\nDie in der Substanz enthaltene Hydroxyzitronensäure verbessert den Stoffwechsel und wirkt effizient auf das Gewebe. Es unterdrückt außerdem den Appetit und maximiert die Fettverbrennung, sodass Sie auf natürliche Weise abnehmen können.", "## Cla-Mischung\nWenn die Verbesserung der Immunität und die Auslösung eines natürlichen Gewichtsmanagements im Körper unerlässlich werden, kann alles allein mit Abnehm Gummibärchen möglich werden. Für die Aufrechterhaltung des gesamten Körpergewichts würde man das wissenschaftliche Mittel wählen. Darüber hinaus ist eine Verstärkung der Immunität auch nicht sehr ungewöhnlich. Mit der gegebenen Ergänzung wird es im Körper des Benutzers sehr deutliche und positive Veränderungen geben.", "## Grüner Tee\nWenn Gewichtsverlust zu einer ernsten Angelegenheit wird, ist die Einführung von grünem Tee zur Beseitigung von Giftstoffen und überschüssigem Fett möglich. Grüner Tee ist der stärkende Inhaltsstoff Nummer eins, der unerwünschte Elemente aus dem Körper entfernt und Sie dauerhaft gesund hält.", "## Löwenzahn\nDer Multivitamin-Inhaltsstoff hebt die Stimmung, reduziert oxidativen Stress und hilft auf natürliche Weise bei der Gewichtskontrolle. Es lohnt sich, dieser Substanz einen allgemeinen Nutzen zu verschaffen.\n\n ## Vorteile der Wahl von Abnehm Gummibärchen\n\nBei der Bewertung der Vorteile der Wahl von Abnehm Gummibärchen konnten wir die folgenden Punkte feststellen. Folgendes bringt die Abhilfe für den Benutzer:\n\n Verwenden Sie die beste Nahrungskapsel und führen Sie mühelos einen gesunden Lebensstil ein\n\n ## Klicken Sie hier, um jetzt auf der offiziellen Website von Abnehm zu kaufen" ]
[ "TAGS\n#region-us \n", "# Abnehm Gummibärchen Test Deutschland Erfahrungen Bewertung Preis, Kaufen\n\nAbnehm Gummibärchen Test Deutschland Neu ist die Idee vom Abnehm-Gummibärchen nicht. Ein Autor mit dem Pseudonym Adrian Janson behauptete bereits 2011 in seinem Buch \"Die Gummibärchen-Diät - Abnehmen mit Bärenkraft\", er habe dank Fruchtgummis vor jeder Hauptmahlzeit acht Kilo in drei Monaten abgenommen. Das Wichtigste der Diät sei die richtige Dosierung und die Zeit, in denen die Bärchen verdrückt werden müssten, so der Autor.", "## Klicken Sie hier, um jetzt auf der offiziellen Website von Abnehm zu kaufen", "## Welche Auswirkungen hat die Gewichtsabnahme durch Abnehm Gummibärchen?\n\nAbnehmen Gummibärchen Erfahrungen sorgt für einen besseren Stoffwechsel im gesamten Körper, so dass die Fettverbrennung konstant bleibt. Der Prozess der Thermogenese verbessert die Wärme im Inneren, so dass zusätzliche Fettablagerungen leicht geschmolzen werden können. Darüber hinaus gibt es eine Trennung des Hungers, um das Risiko von Fettleibigkeit zu verringern. Der verbesserte Stoffwechsel im Körper verringert das Verlangen nach ungesunden Snacks. Es steuert auch das Herz-Kreislauf-System und sorgt auf natürliche Weise für ein gesundes Gewicht.\n\nWenn Benutzer wirklich weiter abnehmen möchten, auch wenn sie nichts tun, ist die Wahl dieser High-End-Lösung eine großartige Option. Es würde Ihr gesamtes Herz-Kreislauf-System stärken und die Ketose für einen gesunden und schnellen Gewichtsverlust auslösen.", "## Garcinia Combogia\nDie in der Substanz enthaltene Hydroxyzitronensäure verbessert den Stoffwechsel und wirkt effizient auf das Gewebe. Es unterdrückt außerdem den Appetit und maximiert die Fettverbrennung, sodass Sie auf natürliche Weise abnehmen können.", "## Cla-Mischung\nWenn die Verbesserung der Immunität und die Auslösung eines natürlichen Gewichtsmanagements im Körper unerlässlich werden, kann alles allein mit Abnehm Gummibärchen möglich werden. Für die Aufrechterhaltung des gesamten Körpergewichts würde man das wissenschaftliche Mittel wählen. Darüber hinaus ist eine Verstärkung der Immunität auch nicht sehr ungewöhnlich. Mit der gegebenen Ergänzung wird es im Körper des Benutzers sehr deutliche und positive Veränderungen geben.", "## Grüner Tee\nWenn Gewichtsverlust zu einer ernsten Angelegenheit wird, ist die Einführung von grünem Tee zur Beseitigung von Giftstoffen und überschüssigem Fett möglich. Grüner Tee ist der stärkende Inhaltsstoff Nummer eins, der unerwünschte Elemente aus dem Körper entfernt und Sie dauerhaft gesund hält.", "## Löwenzahn\nDer Multivitamin-Inhaltsstoff hebt die Stimmung, reduziert oxidativen Stress und hilft auf natürliche Weise bei der Gewichtskontrolle. Es lohnt sich, dieser Substanz einen allgemeinen Nutzen zu verschaffen.\n\n ## Vorteile der Wahl von Abnehm Gummibärchen\n\nBei der Bewertung der Vorteile der Wahl von Abnehm Gummibärchen konnten wir die folgenden Punkte feststellen. Folgendes bringt die Abhilfe für den Benutzer:\n\n Verwenden Sie die beste Nahrungskapsel und führen Sie mühelos einen gesunden Lebensstil ein\n\n ## Klicken Sie hier, um jetzt auf der offiziellen Website von Abnehm zu kaufen" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Grayx/sad_llama_6](https://huggingface.co/Grayx/sad_llama_6) as a base. ### Models Merged The following models were included in the merge: * [cloudyu/Meta-Llama-3-8B-Instruct-DPO](https://huggingface.co/cloudyu/Meta-Llama-3-8B-Instruct-DPO) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: cloudyu/Meta-Llama-3-8B-Instruct-DPO parameters: density: 0.5 weight: 0.5 - model: Grayx/sad_llama_6 parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: Grayx/sad_llama_6 parameters: normalize: false int8_mask: true dtype: float16 ```
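To use the merged weights, they load like any other causal LM checkpoint; a minimal sketch (the dtype/device settings are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allknowingroger/Llam3merge4")
model = AutoModelForCausalLM.from_pretrained(
    "allknowingroger/Llam3merge4",
    torch_dtype="auto",   # picks up the float16 weights produced by the merge
    device_map="auto",
)
```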
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cloudyu/Meta-Llama-3-8B-Instruct-DPO", "Grayx/sad_llama_6"]}
allknowingroger/Llam3merge4
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:cloudyu/Meta-Llama-3-8B-Instruct-DPO", "base_model:Grayx/sad_llama_6", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T05:59:37+00:00
[ "2306.01708" ]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-cloudyu/Meta-Llama-3-8B-Instruct-DPO #base_model-Grayx/sad_llama_6 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the TIES merge method using Grayx/sad_llama_6 as a base. ### Models Merged The following models were included in the merge: * cloudyu/Meta-Llama-3-8B-Instruct-DPO ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using Grayx/sad_llama_6 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* cloudyu/Meta-Llama-3-8B-Instruct-DPO", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-cloudyu/Meta-Llama-3-8B-Instruct-DPO #base_model-Grayx/sad_llama_6 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using Grayx/sad_llama_6 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* cloudyu/Meta-Llama-3-8B-Instruct-DPO", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
<img src=https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png> MoE'd up: - [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) - [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) These were the two most interesting Llama 3 finetunes yet. The resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw3.5
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T05:59:38+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src=URL MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct These were the two most interesting Llama 3 finetunes yet. The resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NDD-dimeshift_test-content_tags This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4229 - Accuracy: 0.9098 - F1: 0.8904 - Precision: 0.8805 - Recall: 0.9098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.1247 | 0.9989 | 669 | 0.4317 | 0.9105 | 0.8906 | 0.8809 | 0.9105 | | 0.1103 | 1.9978 | 1338 | 0.4229 | 0.9098 | 0.8904 | 0.8805 | 0.9098 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
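The card reports metrics but no usage snippet; a minimal sketch of running inference with the `pipeline` API (the example input is hypothetical, since the card does not document the expected content-tag inputs):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="lgk03/NDD-dimeshift_test-content_tags")
print(clf("example page content to tag"))  # returns the predicted tag and its score
```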
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NDD-dimeshift_test-content_tags", "results": []}]}
lgk03/NDD-dimeshift_test-content_tags
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:00:09+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
NDD-dimeshift\_test-content\_tags ================================= This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4229 * Accuracy: 0.9098 * F1: 0.8904 * Precision: 0.8805 * Recall: 0.9098 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
null
## Exllama v2 Quantizations of opus-v1.2-llama-3-8b Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization. <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b> Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json needed for further conversions. Original model: https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>text ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/opus-v1.2-llama-3-8b-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/opus-v1.2-llama-3-8b-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/opus-v1.2-llama-3-8b-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/opus-v1.2-llama-3-8b-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/opus-v1.2-llama-3-8b-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/opus-v1.2-llama-3-8b-exl2 opus-v1.2-llama-3-8b-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/opus-v1.2-llama-3-8b-exl2 --revision 6_5 --local-dir opus-v1.2-llama-3-8b-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/opus-v1.2-llama-3-8b-exl2 --revision 6_5 --local-dir opus-v1.2-llama-3-8b-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
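For completeness, a hedged sketch of loading one of these branches with the ExLlamaV2 Python API; the pattern follows the project's example scripts for v0.0.19, and the local directory name assumes the git clone command above:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "opus-v1.2-llama-3-8b-exl2-6_5"  # downloaded 6.5 bpw branch
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)                 # split layers across available VRAM
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# Prompt built from the card's template above
prompt = "<|im_start|>user\nHi! How are you?<|im_end|>\n<|im_start|>text\n"
print(generator.generate_simple(prompt, settings, 200))
```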
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "tags": ["unsloth", "axolotl"], "pipeline_tag": "text-generation", "quantized_by": "bartowski"}
bartowski/opus-v1.2-llama-3-8b-exl2
null
[ "unsloth", "axolotl", "text-generation", "en", "license:cc-by-nc-nd-4.0", "region:us" ]
null
2024-04-20T06:07:31+00:00
[]
[ "en" ]
TAGS #unsloth #axolotl #text-generation #en #license-cc-by-nc-nd-4.0 #region-us
Exllama v2 Quantizations of opus-v1.2-llama-3-8b ------------------------------------------------ Using <a href="URL ExLlamaV2 v0.0.19 for quantization. **The "main" branch only contains the URL, download one of the other branches for the model (see below)** Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions. Original model: URL Prompt format ------------- Available sizes --------------- Download instructions --------------------- With git: With huggingface hub (credit to TheBloke for instructions): To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch: Linux: Windows (which apparently doesn't like \_ in folders sometimes?): Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#unsloth #axolotl #text-generation #en #license-cc-by-nc-nd-4.0 #region-us \n" ]
null
peft
![](https://raw.githubusercontent.com/saucam/models/main/llama-aero.png) # llama-airo-3 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Details This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the jondurbin/airoboros-3.2 dataset. It achieves the following results on the evaluation set: - Loss: 0.8437 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1845 | 0.0 | 1 | 1.1821 | | 0.9328 | 0.25 | 114 | 0.9228 | | 0.8961 | 0.5 | 228 | 0.8713 | | 0.824 | 0.75 | 342 | 0.8437 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.1.2+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0 ## Eval Results |Benchmark| Model |agieval|gpt4all|bigbench|truthfulqa|Average| |---------|----------------------------------------------------------|------:|------:|-------:|---------:|------:| |nous |[llama-airo-3](https://huggingface.co/saucam/llama-airo-3)| 36.59| 72.24| 39.26| 56.3| 51.1| |nous|[meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)|31.1|69.95|36.7|43.91|45.42| |Benchmark| Model |winogrande| arc |gsm8k|mmlu |truthfulqa|hellaswag|Average| |---------|----------------------------------------------------------|---------:|----:|----:|----:|---------:|--------:|------:| |openllm |[llama-airo-3](https://huggingface.co/saucam/llama-airo-3)| 78.22|61.01|56.33|64.79| 56.35| 82.42| 66.52| |openllm |[Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)| 77.58|57.51|50.87|65.04| 43.93| 82.09| 62.84| Detailed Results: https://github.com/saucam/model_evals/tree/main/saucam/llama-airo-3
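To run the adapter, attach it to the base model with PEFT; a minimal sketch (the dtype/device settings are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "saucam/llama-airo-3")  # apply the LoRA weights
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
```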
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["jondurbin/airoboros-3.2"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "out", "results": []}]}
saucam/llama-airo-3
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "dataset:jondurbin/airoboros-3.2", "base_model:meta-llama/Meta-Llama-3-8B", "license:apache-2.0", "region:us" ]
null
2024-04-20T06:08:43+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #dataset-jondurbin/airoboros-3.2 #base_model-meta-llama/Meta-Llama-3-8B #license-apache-2.0 #region-us
![](URL llama-airo-3 ============ <img src="URL alt="Built with Axolotl" width="200" height="32"/> Details ------- This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the jondurbin/airoboros-3.2 dataset. It achieves the following results on the evaluation set: * Loss: 0.8437 Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 10 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0.dev0 * Pytorch 2.1.2+cu118 * Datasets 2.15.0 * Tokenizers 0.15.0 Eval Results ------------ Detailed Results: URL
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.2+cu118\n* Datasets 2.15.0\n* Tokenizers 0.15.0\n\n\nEval Results\n------------\n\n\n\n\nDetailed Results: URL" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #dataset-jondurbin/airoboros-3.2 #base_model-meta-llama/Meta-Llama-3-8B #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.1.2+cu118\n* Datasets 2.15.0\n* Tokenizers 0.15.0\n\n\nEval Results\n------------\n\n\n\n\nDetailed Results: URL" ]
text-generation
transformers
# [MaziyarPanahi/Llama-3-Smaug-8B-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Smaug-8B-GGUF) - Model creator: [abacusai](https://huggingface.co/abacusai) - Original model: [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B) ## Description [MaziyarPanahi/Llama-3-Smaug-8B-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Smaug-8B-GGUF) contains GGUF format model files for [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B). ## How to use ## Load GGUF models You `MUST` follow the prompt template provided by Llama-3: ```sh ./llama.cpp/main -m Llama-3-Smaug-8B.Q2_K.gguf -r '<|eot_id|>' --in-prefix "\n<|start_header_id|>user<|end_header_id|>\n\n" --in-suffix "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nHi! How are you?<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n" -n 1024 ``` ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
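Beyond the raw llama.cpp CLI above, these files also load through llama-cpp-python's chat API; a hedged sketch (the quant filename and context size are illustrative, and recent builds can read the Llama-3 chat template from the GGUF metadata):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-Smaug-8B.Q2_K.gguf",  # any downloaded quant works
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers if a GPU is available
)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a helpful, smart, kind, and efficient AI assistant."},
    {"role": "user", "content": "Hi! How are you?"},
])
print(out["choices"][0]["message"]["content"])
```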
{"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "mixtral", "text-generation"], "model_name": "Llama-3-Smaug-8B-GGUF", "base_model": "abacusai/Llama-3-Smaug-8B", "inference": false, "model_creator": "abacusai", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/Llama-3-Smaug-8B-GGUF
null
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "mixtral", "base_model:abacusai/Llama-3-Smaug-8B", "text-generation-inference", "region:us" ]
null
2024-04-20T06:09:07+00:00
[]
[]
TAGS #transformers #gguf #mistral #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #text-generation #mixtral #base_model-abacusai/Llama-3-Smaug-8B #text-generation-inference #region-us
# MaziyarPanahi/Llama-3-Smaug-8B-GGUF - Model creator: abacusai - Original model: abacusai/Llama-3-Smaug-8B ## Description MaziyarPanahi/Llama-3-Smaug-8B-GGUF contains GGUF format model files for abacusai/Llama-3-Smaug-8B. ## How to use ## Load GGUF models You 'MUST' follow the prompt template provided by Llama-3: ### About GGUF GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL. Here is an incomplete list of clients and libraries that are known to support GGUF: * URL. The source project for GGUF. Offers a CLI and a server option. * text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection. * URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use. * ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
[ "# MaziyarPanahi/Llama-3-Smaug-8B-GGUF\n- Model creator: abacusai\n- Original model: abacusai/Llama-3-Smaug-8B", "## Description\nMaziyarPanahi/Llama-3-Smaug-8B-GGUF contains GGUF format model files for abacusai/Llama-3-Smaug-8B.", "## How to use", "## Load GGUF models\n\nYou 'MUST' follow the prompt template provided by Llama-3:", "### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models." ]
[ "TAGS\n#transformers #gguf #mistral #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #text-generation #mixtral #base_model-abacusai/Llama-3-Smaug-8B #text-generation-inference #region-us \n", "# MaziyarPanahi/Llama-3-Smaug-8B-GGUF\n- Model creator: abacusai\n- Original model: abacusai/Llama-3-Smaug-8B", "## Description\nMaziyarPanahi/Llama-3-Smaug-8B-GGUF contains GGUF format model files for abacusai/Llama-3-Smaug-8B.", "## How to use", "## Load GGUF models\n\nYou 'MUST' follow the prompt template provided by Llama-3:", "### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml adapter: qlora base_model: meta-llama/Meta-Llama-3-70B-Instruct bf16: auto datasets: - conversation: llama-3 path: a265546be8c24d59bfdc6ba69431b635/./data/with_function_response/original_clean/function_used_training_shuffled.jsonl type: sharegpt - conversation: llama-3 path: a265546be8c24d59bfdc6ba69431b635/./data/with_function_response/original_clean/function_not_used_training.jsonl type: sharegpt - conversation: llama-3 path: a265546be8c24d59bfdc6ba69431b635/./data/with_function_response/parallel_call/parallel_data_training.jsonl type: sharegpt debug: null deepspeed: null early_stopping_patience: null eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: - full_shard - auto_wrap fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_cpu_ram_efficient_loading: true fsdp_limit_all_gathers: true fsdp_offload_params: true fsdp_sharding_strategy: FULL_SHARD fsdp_state_dict_type: FULL_STATE_DICT fsdp_sync_module_states: true fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer fsdp_use_orig_params: false gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true group_by_length: false hub_model_id: liuylhf/empower-functions-llama3-70b-parallel-all-linear learning_rate: 0.0002 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lora_target_modules: null lr_scheduler: cosine micro_batch_size: 4 model_type: LlamaForCausalLM num_epochs: 4 optimizer: adamw_torch output_dir: a265546be8c24d59bfdc6ba69431b635/model pad_to_sequence_len: true resume_from_checkpoint: null sample_packing: true saves_per_epoch: 10 sequence_len: 4096 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false val_set_size: 0.05 wandb_entity: null wandb_log_model: null wandb_name: null wandb_project: null wandb_watch: null warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # empower-functions-llama3-70b-parallel-all-linear This model is a fine-tuned version of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0436 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0962 | 0.0067 | 1 | 2.0635 | | 0.0715 | 0.2492 | 37 | 0.0770 | | 0.0556 | 0.4983 | 74 | 0.0600 | | 0.0559 | 0.7475 | 111 | 0.0549 | | 0.0542 | 0.9966 | 148 | 0.0523 | | 0.0439 | 1.2256 | 185 | 0.0505 | | 0.0484 | 1.4747 | 222 | 0.0496 | | 0.043 | 1.7239 | 259 | 0.0477 | | 0.0467 | 1.9731 | 296 | 0.0464 | | 0.0406 | 2.2020 | 333 | 0.0462 | | 0.0424 | 2.4512 | 370 | 0.0453 | | 0.0378 | 2.7003 | 407 | 0.0443 | | 0.0382 | 2.9495 | 444 | 0.0435 | | 0.0352 | 3.1785 | 481 | 0.0439 | | 0.0328 | 3.4276 | 518 | 0.0438 | | 0.0329 | 3.6768 | 555 | 0.0437 | | 0.0378 | 3.9259 | 592 | 0.0436 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.19.1
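Inference mirrors the QLoRA training setup: load the base model in 4-bit and attach the adapter. A sketch under assumed quantization settings:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",
    quantization_config=bnb,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "empower-dev-staging/empower-functions-llama3-70b-parallel-all-linear"
)
```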
{"license": "other", "library_name": "peft", "tags": ["axolotl", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "model-index": [{"name": "empower-functions-llama3-70b-parallel-all-linear", "results": []}]}
empower-dev-staging/empower-functions-llama3-70b-parallel-all-linear
null
[ "peft", "tensorboard", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "license:other", "4-bit", "region:us" ]
null
2024-04-20T06:09:37+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #llama #axolotl #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-other #4-bit #region-us
<img src="URL alt="Built with Axolotl" width="200" height="32"/> See axolotl config axolotl version: '0.4.0' empower-functions-llama3-70b-parallel-all-linear ================================================ This model is a fine-tuned version of meta-llama/Meta-Llama-3-70B-Instruct on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0436 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 10 * num\_epochs: 4 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0 * Pytorch 2.1.2+cu121 * Datasets 2.15.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #llama #axolotl #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-other #4-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.19.1" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: binganao/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
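For grabbing the repo contents locally (e.g., the exported `.onnx` policy), a hedged sketch with `huggingface_hub` — the repo id comes from this card, and nothing here is specific to ML-Agents:

```python
from huggingface_hub import snapshot_download

# Download every file in the repo (config, checkpoints, the .onnx policy)
local_dir = snapshot_download(repo_id="binganao/ppo-Huggy")
print("checkpoint files downloaded to:", local_dir)
```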
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
binganao/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-20T06:10:14+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser: 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: binganao/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: binganao/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: binganao/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-classification
setfit
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [bsen26/eyeR-classification-multi-label-category2](https://huggingface.co/datasets/bsen26/eyeR-classification-multi-label-category2) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a OneVsRestClassifier instance - **Maximum Sequence Length:** 512 tokens <!-- - **Number of Classes:** Unknown --> - **Training Dataset:** [bsen26/eyeR-classification-multi-label-category2](https://huggingface.co/datasets/bsen26/eyeR-classification-multi-label-category2) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.5431 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("bsen26/eyeR-category2-multilabel") # Run inference preds = model("they gave me the wrong toy :(") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 18.3203 | 41 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0010 | 1 | 0.2092 | - | | 0.0521 | 50 | 0.2022 | - | | 0.1042 | 100 | 0.11 | - | | 0.1562 | 150 | 0.1034 | - | | 0.2083 | 200 | 0.029 | - | | 0.2604 | 250 | 0.0529 | - | | 0.3125 | 300 | 0.0386 | - | | 0.3646 | 350 | 0.0104 | - | | 0.4167 | 400 | 0.0166 | - | | 0.4688 | 450 | 0.0129 | - | | 0.5208 | 500 | 0.0071 | - | | 0.5729 | 550 | 0.0459 | - | | 0.625 | 600 | 0.0062 | - | | 0.6771 | 650 | 0.0337 | - | | 0.7292 | 700 | 0.0142 | - | | 0.7812 | 750 | 0.0084 | - | | 0.8333 | 800 | 0.0096 | - | | 0.8854 | 850 | 0.0057 | - | | 0.9375 | 900 | 0.015 | - | | 0.9896 | 950 | 0.0049 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.38.2 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
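Because the classification head is a OneVsRestClassifier, per-label scores are often more useful than hard predictions for multi-label thresholding. A hedged sketch (it assumes `predict_proba` is exposed by this SetFit version and that label order matches the training dataset):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("bsen26/eyeR-category2-multilabel")

texts = ["Had some missing items from my order", "they gave me the wrong toy :("]
# One score per label for each input text
probs = model.predict_proba(texts)

# Turn scores into multi-label predictions with a per-label threshold
preds = probs >= 0.5
print(preds)
```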
{"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "datasets": ["bsen26/eyeR-classification-multi-label-category2"], "metrics": ["accuracy"], "base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "widget": [{"text": "this mcdonalds dont give ketchup"}, {"text": "Had some missing items from my order"}, {"text": "they gave me the wrong toy :("}, {"text": "We have ordered large fries but pang regular lang laman nya swear nakakadisappoint ??"}, {"text": "There was missing item from my order"}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "bsen26/eyeR-classification-multi-label-category2", "type": "bsen26/eyeR-classification-multi-label-category2", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.5431034482758621, "name": "Accuracy"}]}]}]}
bsen26/eyeR-category2-multilabel
null
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "dataset:bsen26/eyeR-classification-multi-label-category2", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
null
2024-04-20T06:11:11+00:00
[ "2209.11055" ]
[]
TAGS #setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #dataset-bsen26/eyeR-classification-multi-label-category2 #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-mpnet-base-v2 #model-index #region-us
SetFit with sentence-transformers/paraphrase-mpnet-base-v2 ========================================================== This is a SetFit model trained on the bsen26/eyeR-classification-multi-label-category2 dataset that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a Sentence Transformer with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. Model Details ------------- ### Model Description * Model Type: SetFit * Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2 * Classification head: a OneVsRestClassifier instance * Maximum Sequence Length: 512 tokens * Training Dataset: bsen26/eyeR-classification-multi-label-category2 ### Model Sources * Repository: SetFit on GitHub * Paper: Efficient Few-Shot Learning Without Prompts * Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts Evaluation ---------- ### Metrics Uses ---- ### Direct Use for Inference First install the SetFit library: Then you can load this model and run inference. Training Details ---------------- ### Training Set Metrics ### Training Hyperparameters * batch\_size: (16, 16) * num\_epochs: (1, 1) * max\_steps: -1 * sampling\_strategy: oversampling * num\_iterations: 20 * body\_learning\_rate: (2e-05, 2e-05) * head\_learning\_rate: 2e-05 * loss: CosineSimilarityLoss * distance\_metric: cosine\_distance * margin: 0.25 * end\_to\_end: False * use\_amp: False * warmup\_proportion: 0.1 * seed: 42 * eval\_max\_steps: -1 * load\_best\_model\_at\_end: False ### Training Results ### Framework Versions * Python: 3.10.12 * SetFit: 1.0.3 * Sentence Transformers: 2.7.0 * Transformers: 4.38.2 * PyTorch: 2.2.1+cu121 * Datasets: 2.19.0 * Tokenizers: 0.15.2 ### BibTeX
[ "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2\n* Classification head: a OneVsRestClassifier instance\n* Maximum Sequence Length: 512 tokens\n* Training Dataset: bsen26/eyeR-classification-multi-label-category2", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 1)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.38.2\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.15.2", "### BibTeX" ]
[ "TAGS\n#setfit #safetensors #mpnet #sentence-transformers #text-classification #generated_from_setfit_trainer #dataset-bsen26/eyeR-classification-multi-label-category2 #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-mpnet-base-v2 #model-index #region-us \n", "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2\n* Classification head: a OneVsRestClassifier instance\n* Maximum Sequence Length: 512 tokens\n* Training Dataset: bsen26/eyeR-classification-multi-label-category2", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 1)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.38.2\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.15.2", "### BibTeX" ]
text-generation
transformers
AI Model Name: Llama 3 70B "Built with Meta Llama 3" https://llama.meta.com/llama3/license/ How to quantize a 70B model so that it fits on 2x 4090 GPUs: I tried EXL2, AutoAWQ, and SqueezeLLM and they all failed for different reasons (issues opened). HQQ worked: I rented a 4x GPU 1TB RAM ($19/hr) instance on runpod with 1024GB container and 1024GB workspace disk space. I think you only need 2x GPU with 80GB VRAM and 512GB+ system RAM, so I probably overpaid. Note you need to fill in the form to get access to the 70B Meta weights. You can copy/paste this into the console and it will just set up everything automatically: ```bash apt update apt install git-lfs vim -y mkdir -p ~/miniconda3 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 ~/miniconda3/bin/conda init bash source ~/.bashrc conda create -n hqq python=3.10 -y && conda activate hqq git lfs install git clone https://github.com/mobiusml/hqq.git cd hqq pip install torch pip install . pip install huggingface_hub[hf_transfer] export HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli login ``` Create a `quantize.py` file by copy/pasting this into the console: ```bash echo " import torch model_id = 'meta-llama/Meta-Llama-3-70B-Instruct' save_dir = 'cat-llama-3-70b-hqq' compute_dtype = torch.bfloat16 from hqq.core.quantize import * quant_config = BaseQuantizeConfig(nbits=4, group_size=64, offload_meta=True) zero_scale_group_size = 128 quant_config['scale_quant_params']['group_size'] = zero_scale_group_size quant_config['zero_quant_params']['group_size'] = zero_scale_group_size from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer model = HQQModelForCausalLM.from_pretrained(model_id) from hqq.models.hf.base import AutoHQQHFModel AutoHQQHFModel.quantize_model(model, quant_config=quant_config, compute_dtype=compute_dtype) AutoHQQHFModel.save_quantized(model, save_dir) model = AutoHQQHFModel.from_quantized(save_dir) model.eval() " > quantize.py ``` Run the script: ```bash python quantize.py ```
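The card stops after quantization. A hedged follow-on sketch for generation, reusing the HQQ API already imported above (the save directory matches `quantize.py`; the chat-template call assumes the stock Llama 3 Instruct tokenizer):

```python
import torch
from hqq.engine.hf import AutoTokenizer
from hqq.models.hf.base import AutoHQQHFModel

tokenizer = AutoTokenizer.from_pretrained('meta-llama/Meta-Llama-3-70B-Instruct')

# Reload the quantized checkpoint written by quantize.py
model = AutoHQQHFModel.from_quantized('cat-llama-3-70b-hqq')
model.eval()

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors='pt'
)
with torch.no_grad():
    out = model.generate(inputs.to(model.device), max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```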
{}
catid/cat-llama-3-70b-hqq
null
[ "transformers", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:11:26+00:00
[]
[]
TAGS #transformers #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
AI Model Name: Llama 3 70B "Built with Meta Llama 3" URL How to quantize a 70B model so that it fits on 2x 4090 GPUs: I tried EXL2, AutoAWQ, and SqueezeLLM and they all failed for different reasons (issues opened). HQQ worked: I rented a 4x GPU 1TB RAM ($19/hr) instance on runpod with 1024GB container and 1024GB workspace disk space. I think you only need 2x GPU with 80GB VRAM and 512GB+ system RAM, so I probably overpaid. Note you need to fill in the form to get access to the 70B Meta weights. You can copy/paste this into the console and it will just set up everything automatically: Create a 'URL' file by copy/pasting this into the console: Run the script:
[]
[ "TAGS\n#transformers #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Konosuba Gemma 7B This is an intermediate model, finetuned on the full Konosuba script so I can build a better finetune later. I uploaded it because it could be relevant to someone. ## Model Details - **Model Name:** Gemma7B-konosuba - **Architecture:** Gemma 7B - **Training Format:** unsloth/gemma-7b-it-bnb-4bit - **Version:** 1.0.0 To use this model in your projects, you can follow the sketch below. # If you want to check out the notebook showing how I did it, you can go to my [github!](https://github.com/wirytiox/Unsloth-wiry-training-suit) <img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" alt="Alt text" width="200"/>
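A minimal loading sketch under stated assumptions: the safetensors weights in this repo load like any Gemma checkpoint through plain transformers (the bundled GGUF file would instead go through llama.cpp):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wirytiox/Gemma7B-konosuba"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself like a Konosuba character."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```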
{"language": ["en"], "license": "apache-2.0"}
wirytiox/Gemma7B-konosuba
null
[ "transformers", "safetensors", "gguf", "gemma", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:15:46+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #gguf #gemma #text-generation #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Konosuba Gemma 7B This is an intermediate model, finetuned on the full Konosuba script so I can build a better finetune later. I uploaded it because it could be relevant to someone. ## Model Details - Model Name: Gemma7B-konosuba - Architecture: Gemma 7B - Training Format: unsloth/gemma-7b-it-bnb-4bit - Version: 1.0.0 To use this model in your projects, you can follow these steps: # If you want to check out the notebook showing how I did it, you can go to my github! <img src="URL alt="Alt text" width="200"/>
[ "# Konosuba gemma 7B\nthis is an intermidate model, it's finetuned with all konosuba script so i can build a better finetune later, i uploaded it because it could be relevant to someone", "## Model Details\n\n- Model Name: Gemma7B-konosuba\n- Architecture: Gemma 7B\n- Training Format: unsloth/gemma-7b-it-bnb-4bit\n- Version: 1.0.0\n\nTo use this model in your projects, you can follow these steps:", "# if you want my notebook to check out how i did it, you can go to my github!\n\n<img src=\"URL alt=\"Alt text\" width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #gguf #gemma #text-generation #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Konosuba gemma 7B\nthis is an intermidate model, it's finetuned with all konosuba script so i can build a better finetune later, i uploaded it because it could be relevant to someone", "## Model Details\n\n- Model Name: Gemma7B-konosuba\n- Architecture: Gemma 7B\n- Training Format: unsloth/gemma-7b-it-bnb-4bit\n- Version: 1.0.0\n\nTo use this model in your projects, you can follow these steps:", "# if you want my notebook to check out how i did it, you can go to my github!\n\n<img src=\"URL alt=\"Alt text\" width=\"200\"/>" ]
text-generation
mlx
# lucataco/Meta-Llama-3-70B-Instruct-4bit This model was converted to MLX format from [`meta-llama/Meta-Llama-3-70B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) using mlx-lm version **0.10.0**. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("lucataco/Meta-Llama-3-70B-Instruct-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "mlx"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "widget": [{"example_title": "Winter holidays", "messages": [{"role": "system", "content": "You are a helpful and honest assistant. 
Please, respond concisely and truthfully."}, {"role": "user", "content": "Can you recommend a good destination for Winter holidays?"}]}, {"example_title": "Programming assistant", "messages": [{"role": "system", "content": "You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully."}, {"role": "user", "content": "Write a function that computes the nth fibonacci number."}]}], "inference": {"parameters": {"max_new_tokens": 300, "stop": ["<|end_of_text|>", "<|eot_id|>"]}}}
lucataco/Meta-Llama-3-70B-Instruct-4bit
null
[ "mlx", "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "license:other", "region:us" ]
null
2024-04-20T06:17:42+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #conversational #en #license-other #region-us
# lucataco/Meta-Llama-3-70B-Instruct-4bit This model was converted to MLX format from ['meta-llama/Meta-Llama-3-70B-Instruct']() using mlx-lm version 0.10.0. Refer to the original model card for more details on the model. ## Use with mlx
[ "# lucataco/Meta-Llama-3-70B-Instruct-4bit\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-70B-Instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #conversational #en #license-other #region-us \n", "# lucataco/Meta-Llama-3-70B-Instruct-4bit\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-70B-Instruct']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
null
transformers
# Uploaded model - **Developed by:** ntvcie - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-bnb-4bit"}
ntvcie/Gemma2bVinhntV6_16bit
null
[ "transformers", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:18:18+00:00
[]
[ "en" ]
TAGS #transformers #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: ntvcie - License: apache-2.0 - Finetuned from model : unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: ntvcie\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
# Model Card for Haggstrom ![](https://huggingface.co/KaraKaraWitch/Haggstrom-Test/resolve/main/Haggstrom.png?download=true) Haggstrom is an experimental & untested QLoRA trained on Llama 3. As for why? This is actually my first time training with axolotl, so I wanted to try a dataset I have on hand. ### Framework versions - PEFT 0.10.0
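No usage snippet is provided; a hedged sketch for attaching the adapter to the base model listed in this card's metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Meta-Llama-3-8B"
adapter_id = "KaraKaraWitch/Haggstrom-Test"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the (experimental, untested) QLoRA adapter on top of the base weights
model = PeftModel.from_pretrained(base, adapter_id)
```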
{"library_name": "peft", "datasets": ["KaraKaraWitch/AnimeSubtitle"], "base_model": "NousResearch/Meta-Llama-3-8B"}
KaraKaraWitch/Haggstrom-Test
null
[ "peft", "safetensors", "llama", "dataset:KaraKaraWitch/AnimeSubtitle", "base_model:NousResearch/Meta-Llama-3-8B", "4-bit", "region:us" ]
null
2024-04-20T06:20:23+00:00
[]
[]
TAGS #peft #safetensors #llama #dataset-KaraKaraWitch/AnimeSubtitle #base_model-NousResearch/Meta-Llama-3-8B #4-bit #region-us
# Model Card for Haggstrom ![](URL Haggstrom is an experimental & untested qlora trained on Llama 3. As for why? This is actually my first time training with axolotl. So I wanted to try some dataset I have on hand. ### Framework versions - PEFT 0.10.0
[ "# Model Card for Haggstrom\n\n![](URL\n\nHaggstrom is an experimental & untested qlora trained on Llama 3.\n\nAs for why? This is actually my first time training with axolotl. So I wanted to try some dataset I have on hand.", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #llama #dataset-KaraKaraWitch/AnimeSubtitle #base_model-NousResearch/Meta-Llama-3-8B #4-bit #region-us \n", "# Model Card for Haggstrom\n\n![](URL\n\nHaggstrom is an experimental & untested qlora trained on Llama 3.\n\nAs for why? This is actually my first time training with axolotl. So I wanted to try some dataset I have on hand.", "### Framework versions\n\n- PEFT 0.10.0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-7b-sharded-bf16-finetuned-HPE This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 320 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "ybelkada/falcon-7b-sharded-bf16", "model-index": [{"name": "falcon-7b-sharded-bf16-finetuned-HPE", "results": []}]}
Aditi25/falcon-7b-sharded-bf16-finetuned-HPE
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-04-20T06:23:26+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us
# falcon-7b-sharded-bf16-finetuned-HPE This model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 320 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# falcon-7b-sharded-bf16-finetuned-HPE\n\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 320", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us \n", "# falcon-7b-sharded-bf16-finetuned-HPE\n\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 320", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
<img src=https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png> MoE'd up: - [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) - [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) These were the two most interesting Llama 3 finetunes so far. The resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw3.7
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:23:36+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src=URL MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct These were the two most interesting Llama 3 finetunes so far. The resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym # load_from_hub is the helper defined in the Deep RL course notebook model = load_from_hub(repo_id="yunkimmy/taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
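Continuing the snippet above, a hedged greedy-rollout sketch; it assumes the course's pickle layout (a `qtable` entry next to `env_id`) and the classic 4-tuple `gym` step API:

```python
import gym
import numpy as np

env = gym.make(model["env_id"])
state = env.reset()

done, total_reward = False, 0
while not done:
    # Greedy action from the learned Q-table
    action = int(np.argmax(model["qtable"][state]))
    # Classic gym API; gymnasium instead returns (obs, reward, terminated, truncated, info)
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```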
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
yunkimmy/taxi
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-20T06:24:35+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Vignav/llama-2-7b-cars-v3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
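Beyond the linked READMEs, a hedged Python sketch via llama-cpp-python, whose `Llama.from_pretrained` helper (assumed present in recent releases) downloads a quant straight from this repo — the filename comes from the table above:

```python
from llama_cpp import Llama

# Download and load the Q4_K_M quant ("fast, recommended" per the table)
llm = Llama.from_pretrained(
    repo_id="mradermacher/llama-2-7b-cars-v3-GGUF",
    filename="llama-2-7b-cars-v3.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Tell me about classic cars.", max_tokens=64)
print(out["choices"][0]["text"])
```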
{"language": ["en"], "library_name": "transformers", "base_model": "Vignav/llama-2-7b-cars-v3", "quantized_by": "mradermacher"}
mradermacher/llama-2-7b-cars-v3-GGUF
null
[ "transformers", "gguf", "en", "base_model:Vignav/llama-2-7b-cars-v3", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:25:35+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-Vignav/llama-2-7b-cars-v3 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-Vignav/llama-2-7b-cars-v3 #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
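Since the "How to Get Started" section above is still a placeholder, here is a minimal, hypothetical usage sketch. It assumes the checkpoint (repo id taken from this record, flagged elsewhere in the record as a Llama-architecture conversational text-generation model) ships a standard chat template; none of this is confirmed by the card itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IntervitensInc/intv_l3_mk5"  # repo id from this record; all usage details are assumptions
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated continuation.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```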
{"library_name": "transformers", "tags": []}
IntervitensInc/intv_l3_mk5
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:27:22+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Question_Generation_ComQ_16 This model is a fine-tuned version of [Gayathri142214002/Question_Generation_ComQ_15](https://huggingface.co/Gayathri142214002/Question_Generation_ComQ_15) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1983 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.164 | 0.78 | 100 | 0.1533 | | 0.1544 | 1.55 | 200 | 0.1668 | | 0.1493 | 2.33 | 300 | 0.1790 | | 0.1461 | 3.1 | 400 | 0.1849 | | 0.1409 | 3.88 | 500 | 0.1900 | | 0.135 | 4.65 | 600 | 0.1935 | | 0.1329 | 5.43 | 700 | 0.1950 | | 0.1321 | 6.2 | 800 | 0.1978 | | 0.1268 | 6.98 | 900 | 0.1983 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
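The card leaves usage undocumented; as a hedged illustration, a T5 text2text checkpoint like this one can typically be driven through the `text2text-generation` pipeline. The input format below (a bare context passage in, a question out) is an assumption, since the card does not specify how training prompts were constructed.

```python
from transformers import pipeline

# Repo id from this record; the expected input format is undocumented,
# so feeding a raw context passage is only a guess.
generator = pipeline("text2text-generation", model="Gayathri142214002/Question_Generation_ComQ_16")
context = "The Nile is the longest river in Africa, flowing through eleven countries."
print(generator(context, max_new_tokens=64)[0]["generated_text"])
```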
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "Gayathri142214002/Question_Generation_ComQ_15", "model-index": [{"name": "Question_Generation_ComQ_16", "results": []}]}
Gayathri142214002/Question_Generation_ComQ_16
null
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Gayathri142214002/Question_Generation_ComQ_15", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:28:21+00:00
[]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-Gayathri142214002/Question_Generation_ComQ_15 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Question\_Generation\_ComQ\_16 ============================== This model is a fine-tuned version of Gayathri142214002/Question\_Generation\_ComQ\_15 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1983 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 7 ### Training results ### Framework versions * Transformers 4.39.2 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-Gayathri142214002/Question_Generation_ComQ_15 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
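The card includes no usage code; below is a minimal sketch under two assumptions: that the repo holds full Idefics2 weights loadable via `AutoModelForVision2Seq` (an adapter-only repo would instead need `peft`), and that the base `HuggingFaceM4/idefics2-8b` processor applies unchanged. The image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "gK29382231121/idefics2-8b-docvqa-finetuned-tutorial"
# Processor taken from the base model, assuming the fine-tune did not change it.
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

image = Image.open("document.png")  # placeholder path to a document image
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is the total amount?"}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```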
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "idefics2-8b-docvqa-finetuned-tutorial", "results": []}]}
gK29382231121/idefics2-8b-docvqa-finetuned-tutorial
null
[ "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
2024-04-20T06:28:30+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us
# idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us \n", "# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text-generation
transformers
# **BLOSSOM-v5-llama3-8b** [💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/) ### What's new? The Blossom V5 series models are fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements. ### Introduction Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Meta-Llama-3-8B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source. Training was conducted in two stages. The first stage used the 40K Wizard, 40K Orca, and 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used the 10K Blossom chat multi-turn dialogue dataset plus 10% randomly sampled data from the first stage, training for 3 epochs. ### Inference Inference is performed in the form of dialogue continuation. Single-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: ``` Multi-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: Hello! How can I assist you today?<|end_of_text|> |Human|: Generate a random number using python |Bot|: ``` Note: At the end of the Bot's output in the historical conversation, append an `<|end_of_text|>` token.
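As a minimal sketch of the dialogue-continuation scheme described above: the prompt string reproduces the single-turn template, the repo id comes from this record, and the generation settings are illustrative assumptions rather than recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azure99/blossom-v5-llama3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

# Build the single-turn dialogue-continuation prompt from the template above.
prompt = (
    "A chat between a human and an artificial intelligence bot. "
    "The bot gives helpful, detailed, and polite answers to the human's questions.\n"
    "|Human|: hello\n"
    "|Bot|: "
)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```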
{"language": ["zh", "en"], "license": "apache-2.0", "datasets": ["Azure99/blossom-chat-v3", "Azure99/blossom-math-v4", "Azure99/blossom-wizard-v3", "Azure99/blossom-orca-v3"]}
Azure99/blossom-v5-llama3-8b
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "en", "dataset:Azure99/blossom-chat-v3", "dataset:Azure99/blossom-math-v4", "dataset:Azure99/blossom-wizard-v3", "dataset:Azure99/blossom-orca-v3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-20T06:28:48+00:00
[]
[ "zh", "en" ]
TAGS #transformers #safetensors #llama #text-generation #conversational #zh #en #dataset-Azure99/blossom-chat-v3 #dataset-Azure99/blossom-math-v4 #dataset-Azure99/blossom-wizard-v3 #dataset-Azure99/blossom-orca-v3 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# BLOSSOM-v5-llama3-8b Github • Blossom Chat Demo ### What's new? The Blossom V5 series models are fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements. ### Introduction Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Meta-Llama-3-8B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source. Training was conducted in two stages. The first stage used the 40K Wizard, 40K Orca, and 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used the 10K Blossom chat multi-turn dialogue dataset plus 10% randomly sampled data from the first stage, training for 3 epochs. ### Inference Inference is performed in the form of dialogue continuation. Single-turn dialogue Multi-turn dialogue Note: At the end of the Bot's output in the historical conversation, append an '<|end_of_text|>' token.
[ "# BLOSSOM-v5-llama3-8b\n\nGithub • Blossom Chat Demo", "### What's new?\n\nThe Blossom V5 series models are fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements.", "### Introduction\n\nBlossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Meta-Llama-3-8B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source.\n\nTraining was conducted in two stages. The first stage used the 40K Wizard, 40K Orca, and 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used the 10K Blossom chat multi-turn dialogue dataset plus 10% randomly sampled data from the first stage, training for 3 epochs.", "### Inference\n\nInference is performed in the form of dialogue continuation.\n\nSingle-turn dialogue\n\n\n\nMulti-turn dialogue\n\n\n\nNote: At the end of the Bot's output in the historical conversation, append an '<|end_of_text|>' token." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #zh #en #dataset-Azure99/blossom-chat-v3 #dataset-Azure99/blossom-math-v4 #dataset-Azure99/blossom-wizard-v3 #dataset-Azure99/blossom-orca-v3 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# BLOSSOM-v5-llama3-8b\n\nGithub • Blossom Chat Demo", "### What's new?\n\nThe Blossom V5 series models are fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements.", "### Introduction\n\nBlossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Meta-Llama-3-8B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source.\n\nTraining was conducted in two stages. The first stage used the 40K Wizard, 40K Orca, and 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used the 10K Blossom chat multi-turn dialogue dataset plus 10% randomly sampled data from the first stage, training for 3 epochs.", "### Inference\n\nInference is performed in the form of dialogue continuation.\n\nSingle-turn dialogue\n\n\n\nMulti-turn dialogue\n\n\n\nNote: At the end of the Bot's output in the historical conversation, append an '<|end_of_text|>' token." ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details This model is a finetuned version of [unsloth/llama-3-8b Model](https://huggingface.co/unsloth/llama-3-8b) on the dataset [Psych8k](https://huggingface.co/datasets/EmoCareAI/Psych8k). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
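Since the "How to Get Started" section is still a placeholder, here is a minimal loading sketch. It assumes this repo contains only the PEFT (LoRA) adapter, to be attached to the `unsloth/llama-3-8b-bnb-4bit` base named in the card; the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-bnb-4bit"               # base model named in the card
adapter_id = "PrahmodhRaj/Llama-3_Psychiatrist_Chat"  # this repo, assumed to be adapter-only

tokenizer = AutoTokenizer.from_pretrained(base_id)
# The base checkpoint is pre-quantized, so bitsandbytes must be installed.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "I have been feeling anxious lately. What can I do?"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```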
{"library_name": "peft", "base_model": "unsloth/llama-3-8b-bnb-4bit"}
PrahmodhRaj/Llama-3_Psychiatrist_Chat
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-bnb-4bit", "region:us" ]
null
2024-04-20T06:32:35+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-unsloth/llama-3-8b-bnb-4bit #region-us
# Model Card for Model ID ## Model Details This model is a finetuned version of unsloth/llama-3-8b Model on the dataset Psych8k. ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details\n\nThis model is a finetuned version of unsloth/llama-3-8b Model on the dataset Psych8k.", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/llama-3-8b-bnb-4bit #region-us \n", "# Model Card for Model ID", "## Model Details\n\nThis model is a finetuned version of unsloth/llama-3-8b Model on the dataset Psych8k.", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0_ablation_declr_5iters5e7_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
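The card omits usage code; the sketch below assumes the checkpoint inherits the chat template of its `HuggingFaceH4/mistral-7b-sft-beta` base, which the card does not confirm.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain DPO fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated reply.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```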
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0_ablation_declr_5iters5e7_iter_1", "results": []}]}
ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:32:41+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0_ablation_declr_5iters5e7_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0_ablation_declr_5iters5e7_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0_ablation_declr_5iters5e7_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-generation
transformers
# LLama3-Gaja-Hindi-8B-v0.1 ## Overview LLama3-Gaja-Hindi-8B-v0.1 is an extension of the Ambari series, a bilingual English/Hindi model developed and released by [Cognitivelab.in](https://www.cognitivelab.in/). This model is specialized for natural language understanding tasks, particularly in the context of instructional pairs. It is built upon the [Llama3 8b](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model, utilizing a fine-tuning process with a curated dataset of translated instructional pairs. <img src="https://cdn-uploads.huggingface.co/production/uploads/6442d975ad54813badc1ddf7/G0u9L6RQJFinST0chQmfL.jpeg" width="500px"> ## Generate ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", torch_dtype=torch.bfloat16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", trust_remote_code=True) # Existing messages list messages = [ {"role": "system", "content": " You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. You can respond in both languages based on the user's request."}, {"role": "user", "content": "Who are you"} ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, # tokenize=False, return_tensors="pt" ).to("cuda") outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"), do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ## Multi-turn Chat To use the LLama3-Gaja-Hindi-8B-v0.1 model for multi-turn chat, you can follow the example code below: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers import GenerationConfig, TextStreamer model = AutoModelForCausalLM.from_pretrained("Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", torch_dtype=torch.bfloat16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", trust_remote_code=True) # Existing messages list messages = [ {"role": "system", "content": " You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. 
You can respond in both languages based on the user's request."}, ] # Function to add user input and generate response def process_user_input(user_input): global messages # Add user's input to messages list messages.append({"role": "user", "content": user_input}) # Prepare the prompt for generation prompt_formatted_message = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) # Configure generation parameters generation_config = GenerationConfig( repetition_penalty=1.2, max_new_tokens=8000, temperature=0.2, top_p=0.95, top_k=40, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"), pad_token_id=tokenizer.pad_token_id, do_sample=True, use_cache=True, return_dict_in_generate=True, output_attentions=False, output_hidden_states=False, output_scores=False, ) streamer = TextStreamer(tokenizer) batch = tokenizer(str(prompt_formatted_message.strip()), return_tensors="pt") print("\033[32mResponse: \033[0m") # Print a green "Response:" label # Generate response generated = model.generate( inputs=batch["input_ids"].to("cuda"), generation_config=generation_config, streamer=streamer, ) # Extract and format assistant's response # print(tokenizer.decode(generated["sequences"].cpu().tolist()[0])) assistant_response = tokenizer.decode(generated["sequences"].cpu().tolist()[0]) # Find the last assistant header and the final <|eot_id|> marker assistant_start_index = assistant_response.rfind("<|start_header_id|>assistant<|end_header_id|>") empty_string_index = assistant_response.rfind("<|eot_id|>") # Extract the text between them if assistant_start_index != -1 and empty_string_index != -1: final_response = assistant_response[assistant_start_index + len("<|start_header_id|>assistant<|end_header_id|>") : empty_string_index] else: # If the markers are not found, fail loudly instead of silently reusing the raw response raise AssertionError("Failed to parse the multi-turn prompt format") # Append the extracted response to the messages list messages.append({"role": "assistant", "content": final_response}) # messages.append({"role": "assistant", "content": assistant_response}) # Print assistant's response # print(f"Assistant: {assistant_response}") # Main interaction loop while True: print("=================================================================================") user_input = input("Input: ") # Prompt user for input # Check if user_input is empty if not user_input.strip(): # .strip() removes any leading or trailing whitespace break # Break out of the loop if input is empty process_user_input(user_input) # Process user's input and generate response ``` ## Prompt format system prompt = `You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. You can respond in both languages based on the user's request.` ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Benchmarks coming soon ## Bilingual Instruct Fine-tuning The model underwent a pivotal stage of supervised fine-tuning with low-rank adaptation, focusing on bilingual instruct fine-tuning. 
This approach involved training the model to respond adeptly in either English or Hindi based on the language specified in the user prompt or instruction. ## References - [Ambari-7B-Instruct Model](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.1)
{"language": ["hi", "en"], "license": "llama2", "library_name": "transformers", "tags": ["hindi", "bilingual"]}
Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1
null
[ "transformers", "safetensors", "llama", "text-generation", "hindi", "bilingual", "conversational", "hi", "en", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:32:49+00:00
[]
[ "hi", "en" ]
TAGS #transformers #safetensors #llama #text-generation #hindi #bilingual #conversational #hi #en #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# LLama3-Gaja-Hindi-8B-v0.1 ## Overview LLama3-Gaja-Hindi-8B-v0.1 is an extension of the Ambari series, a bilingual English/Hindi model developed and released by URL. This model is specialized for natural language understanding tasks, particularly in the context of instructional pairs. It is built upon the Llama3 8b model, utilizing a fine-tuning process with a curated dataset of translated instructional pairs. <img src="URL" width="500px"> ## Generate ## Multi-turn Chat To use the LLama3-Gaja-Hindi-8B-v0.1 model for multi-turn chat, you can follow the example code below: ## Prompt format system prompt = 'You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. You can respond in both languages based on the user's request.' ## Benchmarks coming soon ## Bilingual Instruct Fine-tuning The model underwent a pivotal stage of supervised fine-tuning with low-rank adaptation, focusing on bilingual instruct fine-tuning. This approach involved training the model to respond adeptly in either English or Hindi based on the language specified in the user prompt or instruction. ## References - Ambari-7B-Instruct Model
[ "# LLama3-Gaja-Hindi-8B-v0.1", "## Overview\n\nLLama3-Gaja-Hindi-8B-v0.1 is an extension of the Ambari series, a bilingual English/Hindi model developed and released by URL. This model is specialized for natural language understanding tasks, particularly in the context of instructional pairs. It is built upon the Llama3 8b model, utilizing a fine-tuning process with a curated dataset of translated instructional pairs.\n\n<img src=\"URL\" width=\"500px\">", "## Generate", "## Multi-turn Chat\n\nTo use the LLama3-Gaja-Hindi-8B-v0.1 model for multi-turn chat, you can follow the example code below:", "## Prompt format\n\nsystem prompt = 'You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. You can respond in both languages based on the user's request.'", "## Benchmarks \ncoming soon", "## Bilingual Instruct Fine-tuning\n\nThe model underwent a pivotal stage of supervised fine-tuning with low-rank adaptation, focusing on bilingual instruct fine-tuning. This approach involved training the model to respond adeptly in either English or Hindi based on the language specified in the user prompt or instruction.", "## References\n\n- Ambari-7B-Instruct Model" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #hindi #bilingual #conversational #hi #en #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# LLama3-Gaja-Hindi-8B-v0.1", "## Overview\n\nLLama3-Gaja-Hindi-8B-v0.1 is an extension of the Ambari series, a bilingual English/Hindi model developed and released by URL. This model is specialized for natural language understanding tasks, particularly in the context of instructional pairs. It is built upon the Llama3 8b model, utilizing a fine-tuning process with a curated dataset of translated instructional pairs.\n\n<img src=\"URL\" width=\"500px\">", "## Generate", "## Multi-turn Chat\n\nTo use the LLama3-Gaja-Hindi-8B-v0.1 model for multi-turn chat, you can follow the example code below:", "## Prompt format\n\nsystem prompt = 'You are Gaja, an AI assistant created by Cognitivelab and trained on top of Llama 3 Large language model (LLM), proficient in English and Hindi. You can respond in both languages based on the user's request.'", "## Benchmarks \ncoming soon", "## Bilingual Instruct Fine-tuning\n\nThe model underwent a pivotal stage of supervised fine-tuning with low-rank adaptation, focusing on bilingual instruct fine-tuning. This approach involved training the model to respond adeptly in either English or Hindi based on the language specified in the user prompt or instruction.", "## References\n\n- Ambari-7B-Instruct Model" ]
text-to-image
diffusers
# Hyper-SD Official Repository of the paper: *[Hyper-SD](https://arxiv.org/abs/2404.13686)*. Project Page: https://hyper-sd.github.io/ ![](./hypersd_tearser.jpg) ## News🔥🔥🔥 * Apr.30, 2024. 💥💥💥 Our **8-step CFG-preserved** [Hyper-SDXL-8steps-CFG-LoRA](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SDXL-8steps-CFG-lora.safetensors) and [Hyper-SD15-8steps-CFG-LoRA](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SD15-8steps-CFG-lora.safetensors) are available now (supporting guidance scales of 5~8); we strongly recommend making the 8-step CFG LoRA a standard configuration for all SDXL and SD15 models!!! (A 4-step version is coming soon.) 💥💥💥 * Apr.28, 2024. ComfyUI workflows for the 1-step unified LoRA 🥰 with TCDScheduler, covering inference at different step counts, are [released](https://huggingface.co/ByteDance/Hyper-SD/tree/main/comfyui)! Remember to install ⭕️ [ComfyUI-TCD](https://github.com/JettHu/ComfyUI-TCD) in your `ComfyUI/custom_nodes` folder!!! You're encouraged to adjust the eta parameter to get better results 🌟! * Apr.26, 2024. 💥💥💥 Our CFG-preserved Hyper-SD15/SDXL models, which support negative prompts and larger guidance scales (e.g. 5~8), are coming soon!!! 💥💥💥 * Apr.26, 2024. Thanks to @[Pete](https://huggingface.co/pngwn) for contributing a larger canvas to our [scribble demo](https://huggingface.co/spaces/ByteDance/Hyper-SD15-Scribble) 👏. * Apr.24, 2024. The ComfyUI [workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SDXL-1step-Unet-workflow.json) and [checkpoint](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors) for the 1-step SDXL UNet ✨ are also available! Don't forget ⭕️ to install the custom [scheduler](https://huggingface.co/ByteDance/Hyper-SD/tree/main/comfyui/ComfyUI-HyperSDXL1StepUnetScheduler) in your `ComfyUI/custom_nodes` folder!!! * Apr.23, 2024. ComfyUI workflows for the N-step LoRAs are [released](https://huggingface.co/ByteDance/Hyper-SD/tree/main/comfyui)! Worth a try for creators 💥! * Apr.23, 2024. Our technical report 📚 is uploaded to [arXiv](https://arxiv.org/abs/2404.13686)! Many implementation details are provided and we welcome more discussions 👏. * Apr.21, 2024. Hyper-SD ⚡️ is highly compatible and works well with different base models and ControlNets. To clarify, we also append a ControlNet usage example [here](https://huggingface.co/ByteDance/Hyper-SD#controlnet-usage). * Apr.20, 2024. Our checkpoints and two demos 🤗 (i.e. [SD15-Scribble](https://huggingface.co/spaces/ByteDance/Hyper-SD15-Scribble) and [SDXL-T2I](https://huggingface.co/spaces/ByteDance/Hyper-SDXL-1Step-T2I)) are publicly available in our [Hugging Face repo](https://huggingface.co/ByteDance/Hyper-SD). ## Try our Hugging Face demos: Hyper-SD Scribble demo hosted on [🤗 scribble](https://huggingface.co/spaces/ByteDance/Hyper-SD15-Scribble) Hyper-SDXL one-step text-to-image demo hosted on [🤗 T2I](https://huggingface.co/spaces/ByteDance/Hyper-SDXL-1Step-T2I) ## Introduction Hyper-SD is a new state-of-the-art diffusion model acceleration technique. In this repository, we release the models distilled from [SDXL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and [Stable-Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). ## Checkpoints * `Hyper-SDXL-Nstep-lora.safetensors`: LoRA checkpoint, for SDXL-related models. * `Hyper-SD15-Nstep-lora.safetensors`: LoRA checkpoint, for SD1.5-related models. 
* `Hyper-SDXL-1step-unet.safetensors`: UNet checkpoint distilled from SDXL-Base. ## Text-to-Image Usage ### SDXL-related models #### 2-step, 4-step, and 8-step LoRAs Take the 2-step LoRA as an example; you can also use the other LoRAs with the corresponding inference-step settings. ```python import torch from diffusers import DiffusionPipeline, DDIMScheduler from huggingface_hub import hf_hub_download base_model_id = "stabilityai/stable-diffusion-xl-base-1.0" repo_name = "ByteDance/Hyper-SD" # Take the 2-step LoRA as an example ckpt_name = "Hyper-SDXL-2steps-lora.safetensors" # Load model. pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda") pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name)) pipe.fuse_lora() # Ensure the DDIM scheduler's timestep spacing is set to "trailing" !!! pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") prompt="a photo of a cat" image=pipe(prompt=prompt, num_inference_steps=2, guidance_scale=0).images[0] ``` #### Unified LoRA (supports 1 to 8 inference steps) You can flexibly adjust the number of inference steps and the eta value to achieve the best performance. ```python import torch from diffusers import DiffusionPipeline, TCDScheduler from huggingface_hub import hf_hub_download base_model_id = "stabilityai/stable-diffusion-xl-base-1.0" repo_name = "ByteDance/Hyper-SD" ckpt_name = "Hyper-SDXL-1step-lora.safetensors" # Load model. pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda") pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name)) pipe.fuse_lora() # Use the TCD scheduler to achieve better image quality pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config) # A lower eta yields more detail in multi-step inference eta=1.0 prompt="a photo of a cat" image=pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, eta=eta).images[0] ``` #### 1-step SDXL UNet For single-step inference only. ```python import torch from diffusers import DiffusionPipeline, UNet2DConditionModel, LCMScheduler from huggingface_hub import hf_hub_download from safetensors.torch import load_file base_model_id = "stabilityai/stable-diffusion-xl-base-1.0" repo_name = "ByteDance/Hyper-SD" ckpt_name = "Hyper-SDXL-1step-Unet.safetensors" # Load model. unet = UNet2DConditionModel.from_config(base_model_id, subfolder="unet").to("cuda", torch.float16) unet.load_state_dict(load_file(hf_hub_download(repo_name, ckpt_name), device="cuda")) pipe = DiffusionPipeline.from_pretrained(base_model_id, unet=unet, torch_dtype=torch.float16, variant="fp16").to("cuda") # Use the LCM scheduler instead of DDIM to support passing specific timesteps pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) # Set the start timestep to 800 for one-step inference to get better results prompt="a photo of a cat" image=pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, timesteps=[800]).images[0] ``` ### SD1.5-related models #### 2-step, 4-step, and 8-step LoRAs Take the 2-step LoRA as an example; you can also use the other LoRAs with the corresponding inference-step settings. ```python import torch from diffusers import DiffusionPipeline, DDIMScheduler from huggingface_hub import hf_hub_download base_model_id = "runwayml/stable-diffusion-v1-5" repo_name = "ByteDance/Hyper-SD" # Take the 2-step LoRA as an example ckpt_name = "Hyper-SD15-2steps-lora.safetensors" # Load model. 
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda") pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name)) pipe.fuse_lora() # Ensure the DDIM scheduler's timestep spacing is set to "trailing" !!! pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") prompt="a photo of a cat" image=pipe(prompt=prompt, num_inference_steps=2, guidance_scale=0).images[0] ``` #### Unified LoRA (supports 1 to 8 inference steps) You can flexibly adjust the number of inference steps and the eta value to achieve the best performance. ```python import torch from diffusers import DiffusionPipeline, TCDScheduler from huggingface_hub import hf_hub_download base_model_id = "runwayml/stable-diffusion-v1-5" repo_name = "ByteDance/Hyper-SD" ckpt_name = "Hyper-SD15-1step-lora.safetensors" # Load model. pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda") pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name)) pipe.fuse_lora() # Use the TCD scheduler to achieve better image quality pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config) # A lower eta yields more detail in multi-step inference eta=1.0 prompt="a photo of a cat" image=pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, eta=eta).images[0] ``` ## ControlNet Usage ### SDXL-related models #### 2-step, 4-step, and 8-step LoRAs Take the Canny ControlNet with 2-step inference as an example: ```python import torch from diffusers.utils import load_image import numpy as np import cv2 from PIL import Image from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL, DDIMScheduler from huggingface_hub import hf_hub_download # Load the original image image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png") image = np.array(image) # Prepare the Canny control image low_threshold = 100 high_threshold = 200 image = cv2.Canny(image, low_threshold, high_threshold) image = image[:, :, None] image = np.concatenate([image, image, image], axis=2) control_image = Image.fromarray(image) control_image.save("control.png") control_weight = 0.5 # recommended for good generalization # Initialize the pipeline controlnet = ControlNetModel.from_pretrained( "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 ) vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) pipe = StableDiffusionXLControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16).to("cuda") pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SDXL-2steps-lora.safetensors")) # Ensure the DDIM scheduler's timestep spacing is set to "trailing" !!! 
## ControlNet Usage

### SDXL-related models

#### 2-Steps, 4-Steps, 8-steps LoRA

Take the Canny ControlNet and 2-steps inference as an example:

```python
import torch
from diffusers.utils import load_image
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL, DDIMScheduler
from huggingface_hub import hf_hub_download

# Load the original image
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
image = np.array(image)

# Prepare the Canny control image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("control.png")
control_weight = 0.5  # recommended for good generalization

# Initialize the pipeline
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0",
    torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet, vae=vae, torch_dtype=torch.float16).to("cuda")

pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SDXL-2steps-lora.safetensors"))
# Ensure the DDIM scheduler timestep spacing is set to "trailing"!
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
pipe.fuse_lora()
image = pipe("A chocolate cookie", num_inference_steps=2, image=control_image, guidance_scale=0, controlnet_conditioning_scale=control_weight).images[0]
image.save('image_out.png')
```

#### Unified LoRA (support 1 to 8 steps inference)

Take the Canny ControlNet as an example:

```python
import torch
from diffusers.utils import load_image
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL, TCDScheduler
from huggingface_hub import hf_hub_download

# Load the original image
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
image = np.array(image)

# Prepare the Canny control image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("control.png")
control_weight = 0.5  # recommended for good generalization

# Initialize the pipeline
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0",
    torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet, vae=vae, torch_dtype=torch.float16).to("cuda")

# Load the Hyper-SDXL-1step unified LoRA
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SDXL-1step-lora.safetensors"))
pipe.fuse_lora()
# Use the TCD scheduler to achieve better image quality
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# A lower eta yields more detail in multi-step inference
eta = 1.0
image = pipe("A chocolate cookie", num_inference_steps=4, image=control_image, guidance_scale=0, controlnet_conditioning_scale=control_weight, eta=eta).images[0]
image.save('image_out.png')
```
### SD1.5-related models

#### 2-Steps, 4-Steps, 8-steps LoRA

Take the Canny ControlNet and 2-steps inference as an example:

```python
import torch
from diffusers.utils import load_image
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, DDIMScheduler
from huggingface_hub import hf_hub_download

controlnet_checkpoint = "lllyasviel/control_v11p_sd15_canny"

# Load the original image
image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/input.png")
image = np.array(image)

# Prepare the Canny control image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("control.png")

# Initialize the pipeline
controlnet = ControlNetModel.from_pretrained(controlnet_checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SD15-2steps-lora.safetensors"))
pipe.fuse_lora()
# Ensure the DDIM scheduler timestep spacing is set to "trailing"!
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
image = pipe("a blue paradise bird in the jungle", num_inference_steps=2, image=control_image, guidance_scale=0).images[0]
image.save('image_out.png')
```

#### Unified LoRA (support 1 to 8 steps inference)

Take the Canny ControlNet as an example:

```python
import torch
from diffusers.utils import load_image
import numpy as np
import cv2
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, TCDScheduler
from huggingface_hub import hf_hub_download

controlnet_checkpoint = "lllyasviel/control_v11p_sd15_canny"

# Load the original image
image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/input.png")
image = np.array(image)

# Prepare the Canny control image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
control_image = Image.fromarray(image)
control_image.save("control.png")

# Initialize the pipeline
controlnet = ControlNetModel.from_pretrained(controlnet_checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16).to("cuda")
# Load the Hyper-SD15-1step unified LoRA
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-SD15-1step-lora.safetensors"))
pipe.fuse_lora()
# Use the TCD scheduler to achieve better image quality
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# A lower eta yields more detail in multi-step inference
eta = 1.0
image = pipe("a blue paradise bird in the jungle", num_inference_steps=1, image=control_image, guidance_scale=0, eta=eta).images[0]
image.save('image_out.png')
```

## Comfyui Usage

* `Hyper-SDXL-Nsteps-lora.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SDXL-Nsteps-lora-workflow.json)
* `Hyper-SD15-Nsteps-lora.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SD15-Nsteps-lora-workflow.json)
* `Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SDXL-1step-Unet-workflow.json)
  * **REQUIREMENT / INSTALL** for the 1-Step SDXL UNet: please install our [scheduler folder](https://huggingface.co/ByteDance/Hyper-SD/tree/main/comfyui/ComfyUI-HyperSDXL1StepUnetScheduler) into your `ComfyUI/custom_nodes` to enable sampling from timestep 800 instead of 999.
  * i.e., make sure the `ComfyUI/custom_nodes/ComfyUI-HyperSDXL1StepUnetScheduler` folder exists.
  * For more details, please refer to our [technical report](https://arxiv.org/abs/2404.13686).
* `Hyper-SD15-1step-lora.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SD15-1step-unified-lora-workflow.json)
* `Hyper-SDXL-1step-lora.safetensors`: [text-to-image workflow](https://huggingface.co/ByteDance/Hyper-SD/blob/main/comfyui/Hyper-SDXL-1step-unified-lora-workflow.json)
  * **REQUIREMENT / INSTALL** for the 1-Step Unified LoRAs: please install [ComfyUI-TCD](https://github.com/JettHu/ComfyUI-TCD) into your `ComfyUI/custom_nodes` to enable the TCDScheduler with support for different inference steps (1~8) from a single checkpoint.
  * i.e., make sure the `ComfyUI/custom_nodes/ComfyUI-TCD` folder exists.
  * You're encouraged to adjust the eta parameter in the TCDScheduler to get better results; an install sketch follows this list.
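A minimal install sketch for the two requirements above, assuming a ComfyUI checkout at `./ComfyUI` and the `huggingface_hub` package; the local paths are illustrative:

```python
import shutil
from huggingface_hub import snapshot_download

# 1) The one-step UNet scheduler nodes live inside the Hyper-SD model repo itself:
#    download a snapshot and copy the folder into ComfyUI/custom_nodes.
repo_path = snapshot_download(
    repo_id="ByteDance/Hyper-SD",
    allow_patterns=["comfyui/ComfyUI-HyperSDXL1StepUnetScheduler/*"],
)
shutil.copytree(
    f"{repo_path}/comfyui/ComfyUI-HyperSDXL1StepUnetScheduler",
    "ComfyUI/custom_nodes/ComfyUI-HyperSDXL1StepUnetScheduler",
)

# 2) ComfyUI-TCD is an ordinary git repo; clone it alongside, e.g.:
#    git clone https://github.com/JettHu/ComfyUI-TCD ComfyUI/custom_nodes/ComfyUI-TCD
```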
## Citation

```bibtex
@misc{ren2024hypersd,
    title={Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis},
    author={Yuxi Ren and Xin Xia and Yanzuo Lu and Jiacheng Zhang and Jie Wu and Pan Xie and Xing Wang and Xuefeng Xiao},
    year={2024},
    eprint={2404.13686},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
{"license": "openrail++", "library_name": "diffusers", "tags": ["lora", "text-to-image", "stable-diffusion"], "inference": false}
ByteDance/Hyper-SD
null
[ "diffusers", "lora", "text-to-image", "stable-diffusion", "arxiv:2404.13686", "license:openrail++", "has_space", "region:us" ]
null
2024-04-20T06:34:54+00:00
[ "2404.13686" ]
[]
TAGS #diffusers #lora #text-to-image #stable-diffusion #arxiv-2404.13686 #license-openrail++ #has_space #region-us
# Hyper-SD Official Repository of the paper: *Hyper-SD*. Project Page: URL ![](./hypersd_tearser.jpg) ## News * Apr.30, 2024. Our 8-Steps CFG-Preserved Hyper-SDXL-8steps-CFG-LoRA and Hyper-SD15-8steps-CFG-LoRA are available now (supporting 5~8 guidance scales); we strongly recommend making the 8-step CFG-LoRA a standard configuration for all SDXL and SD15 models!!! (the 4-steps version will be coming soon) * Apr.28, 2024. ComfyUI workflows on the 1-Step Unified LoRA with TCDScheduler for inference on different steps are released! Remember to install ⭕️ ComfyUI-TCD in your 'ComfyUI/custom_nodes' folder!!! You're encouraged to adjust the eta parameter to get better results! * Apr.26, 2024. Our CFG-Preserved Hyper-SD15/SDXL that facilitate negative prompts and larger guidance scales (e.g. 5~8) will be coming soon!!! * Apr.26, 2024. Thanks to @Pete for contributing to our scribble demo with a larger canvas right now. * Apr.24, 2024. The ComfyUI workflow and checkpoint on the 1-Step SDXL UNet are also available! Don't forget ⭕️ to install the custom scheduler in your 'ComfyUI/custom_nodes' folder!!! * Apr.23, 2024. ComfyUI workflows on N-Steps LoRAs are released! Worth a try for creators! * Apr.23, 2024. Our technical report is uploaded to arXiv! Many implementation details are provided and we welcome more discussions. * Apr.21, 2024. Hyper-SD is highly compatible and works well with different base models and ControlNets. To clarify, we also append the usage example of ControlNet here. * Apr.20, 2024. Our checkpoints and two demos (i.e. SD15-Scribble and SDXL-T2I) are publicly available on the HuggingFace Repo. ## Try our Hugging Face demos: Hyper-SD Scribble demo hosted on scribble Hyper-SDXL One-step Text-to-Image demo hosted on T2I ## Introduction Hyper-SD is one of the new State-of-the-Art diffusion model acceleration techniques. In this repository, we release the models distilled from SDXL Base 1.0 and Stable-Diffusion v1-5. ## Checkpoints * 'Hyper-SDXL-Nstep-lora.safetensors': Lora checkpoint, for SDXL-related models. * 'Hyper-SD15-Nstep-lora.safetensors': Lora checkpoint, for SD1.5-related models. * 'Hyper-SDXL-1step-unet.safetensors': Unet checkpoint distilled from SDXL-Base. ## Text-to-Image Usage ### SDXL-related models #### 2-Steps, 4-Steps, 8-steps LoRA Take the 2-steps LoRA as an example; you can also use the other LoRAs with the corresponding inference-step setting. #### Unified LoRA (support 1 to 8 steps inference) You can flexibly adjust the number of inference steps and the eta value to achieve the best performance. #### 1-step SDXL Unet Only for single-step inference. ### SD1.5-related models #### 2-Steps, 4-Steps, 8-steps LoRA Take the 2-steps LoRA as an example; you can also use the other LoRAs with the corresponding inference-step setting. #### Unified LoRA (support 1 to 8 steps inference) You can flexibly adjust the number of inference steps and the eta value to achieve the best performance. 
## ControlNet Usage ### SDXL-related models #### 2-Steps, 4-Steps, 8-steps LoRA Take Canny Controlnet and 2-steps inference as an example: #### Unified LoRA (support 1 to 8 steps inference) Take Canny Controlnet as an example: ### SD1.5-related models #### 2-Steps, 4-Steps, 8-steps LoRA Take Canny Controlnet and 2-steps inference as an example: #### Unified LoRA (support 1 to 8 steps inference) Take Canny Controlnet as an example: ## Comfyui Usage * 'Hyper-SDXL-Nsteps-lora.safetensors': text-to-image workflow * 'Hyper-SD15-Nsteps-lora.safetensors': text-to-image workflow * 'Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors': text-to-image workflow * REQUIREMENT / INSTALL for 1-Step SDXL UNet: Please install our scheduler folder into your 'ComfyUI/custom_nodes' to enable sampling from timestep 800 instead of 999. * i.e. making sure the 'ComfyUI/custom_nodes/ComfyUI-HyperSDXL1StepUnetScheduler' folder exists. * For more details, please refer to our technical report. * 'Hyper-SD15-1step-lora.safetensors': text-to-image workflow * 'Hyper-SDXL-1step-lora.safetensors': text-to-image workflow * REQUIREMENT / INSTALL for 1-Step Unified LoRAs: Please install ComfyUI-TCD into your 'ComfyUI/custom_nodes' to enable the TCDScheduler with support for different inference steps (1~8) using a single checkpoint. * i.e. making sure the 'ComfyUI/custom_nodes/ComfyUI-TCD' folder exists. * You're encouraged to adjust the eta parameter in the TCDScheduler to get better results.
[ "# Hyper-SD\nOfficial Repository of the paper: *Hyper-SD*.\n\nProject Page: URL\n\n![](./hypersd_tearser.jpg)", "## News\n\n* Apr.30, 2024. Our 8-Steps CFG-Preserved Hyper-SDXL-8steps-CFG-LoRA and Hyper-SD15-8steps-CFG-LoRA is available now(support 5~8 guidance scales), we strongly recommend making the 8-step CFGLora a standard configuration for all SDXL and SD15 models!!! (the 4-steps version will be coming soon)\n* Apr.28, 2024. ComfyUI workflows on 1-Step Unified LoRA with TCDScheduler to inference on different steps are released! Remember to install ⭕️ ComfyUI-TCD in your 'ComfyUI/custom_nodes' folder!!! You're encouraged to adjust the eta parameter to get better results !\n* Apr.26, 2024. Our CFG-Preserved Hyper-SD15/SDXL that facilitate negative prompts and larger guidance scales (e.g. 5~8) will be coming soon!!! \n* Apr.26, 2024. Thanks to @Pete for contributing to our scribble demo with larger canvas right now .\n* Apr.24, 2024. The ComfyUI workflow and checkpoint on 1-Step SDXL UNet is also available! Don't forget ⭕️ to install the custom scheduler in your 'ComfyUI/custom_nodes' folder!!!\n* Apr.23, 2024. ComfyUI workflows on N-Steps LoRAs are released! Worth a try for creators !\n* Apr.23, 2024. Our technical report is uploaded to arXiv! Many implementation details are provided and we welcome more discussions.\n* Apr.21, 2024. Hyper-SD ️ is highly compatible and work well with different base models and controlnets. To clarify, we also append the usage example of controlnet here.\n* Apr.20, 2024. Our checkpoints and two demos (i.e. SD15-Scribble and SDXL-T2I) are publicly available on HuggingFace Repo.", "## Try our Hugging Face demos: \nHyper-SD Scribble demo host on scribble \n\nHyper-SDXL One-step Text-to-Image demo host on T2I", "## Introduction\n\nHyper-SD is one of the new State-of-the-Art diffusion model acceleration techniques.\nIn this repository, we release the models distilled from SDXL Base 1.0 and Stable-Diffusion v1-5。", "## Checkpoints\n\n* 'Hyper-SDXL-Nstep-lora.safetensors': Lora checkpoint, for SDXL-related models.\n* 'Hyper-SD15-Nstep-lora.safetensors': Lora checkpoint, for SD1.5-related models.\n* 'Hyper-SDXL-1step-unet.safetensors': Unet checkpoint distilled from SDXL-Base.", "## Text-to-Image Usage", "### SDXL-related models", "#### 2-Steps, 4-Steps, 8-steps LoRA\nTake the 2-steps LoRA as an example, you can also use other LoRAs for the corresponding inference steps setting.", "#### Unified LoRA (support 1 to 8 steps inference)\nYou can flexibly adjust the number of inference steps and eta value to achieve best performance.", "#### 1-step SDXL Unet\nOnly for the single step inference.", "### SD1.5-related models", "#### 2-Steps, 4-Steps, 8-steps LoRA\nTake the 2-steps LoRA as an example, you can also use other LoRAs for the corresponding inference steps setting.", "#### Unified LoRA (support 1 to 8 steps inference)\nYou can flexibly adjust the number of inference steps and eta value to achieve best performance.", "## ControlNet Usage", "### SDXL-related models", "#### 2-Steps, 4-Steps, 8-steps LoRA\nTake Canny Controlnet and 2-steps inference as an example:", "#### Unified LoRA (support 1 to 8 steps inference)\nTake Canny Controlnet as an example:", "### SD1.5-related models", "#### 2-Steps, 4-Steps, 8-steps LoRA\nTake Canny Controlnet and 2-steps inference as an example:", "#### Unified LoRA (support 1 to 8 steps inference)\nTake Canny Controlnet as an example:", "## Comfyui Usage\n* 'Hyper-SDXL-Nsteps-lora.safetensors': text-to-image workflow\n* 
'Hyper-SD15-Nsteps-lora.safetensors': text-to-image workflow\n* 'Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors': text-to-image workflow\n * REQUIREMENT / INSTALL for 1-Step SDXL UNet: Please install our scheduler folder into your 'ComfyUI/custom_nodes' to enable sampling from 800 timestep instead of 999. \n * i.e. making sure the 'ComfyUI/custom_nodes/ComfyUI-HyperSDXL1StepUnetScheduler' folder exist.\n * For more details, please refer to our technical report.\n* 'Hyper-SD15-1step-lora.safetensors': text-to-image workflow\n* 'Hyper-SDXL-1step-lora.safetensors': text-to-image workflow\n * REQUIREMENT / INSTALL for 1-Step Unified LoRAs: Please install the ComfyUI-TCD into your 'ComfyUI/custom_nodes' to enable TCDScheduler with support of different inference steps (1~8) using single checkpoint.\n * i.e. making sure the 'ComfyUI/custom_nodes/ComfyUI-TCD' folder exist.\n * You're encouraged to adjust the eta parameter in TCDScheduler to get better results." ]
[ "TAGS\n#diffusers #lora #text-to-image #stable-diffusion #arxiv-2404.13686 #license-openrail++ #has_space #region-us \n", "# Hyper-SD\nOfficial Repository of the paper: *Hyper-SD*.\n\nProject Page: URL\n\n![](./hypersd_tearser.jpg)", "## News\n\n* Apr.30, 2024. Our 8-Steps CFG-Preserved Hyper-SDXL-8steps-CFG-LoRA and Hyper-SD15-8steps-CFG-LoRA is available now(support 5~8 guidance scales), we strongly recommend making the 8-step CFGLora a standard configuration for all SDXL and SD15 models!!! (the 4-steps version will be coming soon)\n* Apr.28, 2024. ComfyUI workflows on 1-Step Unified LoRA with TCDScheduler to inference on different steps are released! Remember to install ⭕️ ComfyUI-TCD in your 'ComfyUI/custom_nodes' folder!!! You're encouraged to adjust the eta parameter to get better results !\n* Apr.26, 2024. Our CFG-Preserved Hyper-SD15/SDXL that facilitate negative prompts and larger guidance scales (e.g. 5~8) will be coming soon!!! \n* Apr.26, 2024. Thanks to @Pete for contributing to our scribble demo with larger canvas right now .\n* Apr.24, 2024. The ComfyUI workflow and checkpoint on 1-Step SDXL UNet is also available! Don't forget ⭕️ to install the custom scheduler in your 'ComfyUI/custom_nodes' folder!!!\n* Apr.23, 2024. ComfyUI workflows on N-Steps LoRAs are released! Worth a try for creators !\n* Apr.23, 2024. Our technical report is uploaded to arXiv! Many implementation details are provided and we welcome more discussions.\n* Apr.21, 2024. Hyper-SD ️ is highly compatible and work well with different base models and controlnets. To clarify, we also append the usage example of controlnet here.\n* Apr.20, 2024. Our checkpoints and two demos (i.e. SD15-Scribble and SDXL-T2I) are publicly available on HuggingFace Repo.", "## Try our Hugging Face demos: \nHyper-SD Scribble demo host on scribble \n\nHyper-SDXL One-step Text-to-Image demo host on T2I", "## Introduction\n\nHyper-SD is one of the new State-of-the-Art diffusion model acceleration techniques.\nIn this repository, we release the models distilled from SDXL Base 1.0 and Stable-Diffusion v1-5。", "## Checkpoints\n\n* 'Hyper-SDXL-Nstep-lora.safetensors': Lora checkpoint, for SDXL-related models.\n* 'Hyper-SD15-Nstep-lora.safetensors': Lora checkpoint, for SD1.5-related models.\n* 'Hyper-SDXL-1step-unet.safetensors': Unet checkpoint distilled from SDXL-Base.", "## Text-to-Image Usage", "### SDXL-related models", "#### 2-Steps, 4-Steps, 8-steps LoRA\nTake the 2-steps LoRA as an example, you can also use other LoRAs for the corresponding inference steps setting.", "#### Unified LoRA (support 1 to 8 steps inference)\nYou can flexibly adjust the number of inference steps and eta value to achieve best performance.", "#### 1-step SDXL Unet\nOnly for the single step inference.", "### SD1.5-related models", "#### 2-Steps, 4-Steps, 8-steps LoRA\nTake the 2-steps LoRA as an example, you can also use other LoRAs for the corresponding inference steps setting.", "#### Unified LoRA (support 1 to 8 steps inference)\nYou can flexibly adjust the number of inference steps and eta value to achieve best performance.", "## ControlNet Usage", "### SDXL-related models", "#### 2-Steps, 4-Steps, 8-steps LoRA\nTake Canny Controlnet and 2-steps inference as an example:", "#### Unified LoRA (support 1 to 8 steps inference)\nTake Canny Controlnet as an example:", "### SD1.5-related models", "#### 2-Steps, 4-Steps, 8-steps LoRA\nTake Canny Controlnet and 2-steps inference as an example:", "#### Unified LoRA (support 1 to 8 steps inference)\nTake Canny 
Controlnet as an example:", "## Comfyui Usage\n* 'Hyper-SDXL-Nsteps-lora.safetensors': text-to-image workflow\n* 'Hyper-SD15-Nsteps-lora.safetensors': text-to-image workflow\n* 'Hyper-SDXL-1step-Unet-Comfyui.fp16.safetensors': text-to-image workflow\n * REQUIREMENT / INSTALL for 1-Step SDXL UNet: Please install our scheduler folder into your 'ComfyUI/custom_nodes' to enable sampling from 800 timestep instead of 999. \n * i.e. making sure the 'ComfyUI/custom_nodes/ComfyUI-HyperSDXL1StepUnetScheduler' folder exist.\n * For more details, please refer to our technical report.\n* 'Hyper-SD15-1step-lora.safetensors': text-to-image workflow\n* 'Hyper-SDXL-1step-lora.safetensors': text-to-image workflow\n * REQUIREMENT / INSTALL for 1-Step Unified LoRAs: Please install the ComfyUI-TCD into your 'ComfyUI/custom_nodes' to enable TCDScheduler with support of different inference steps (1~8) using single checkpoint.\n * i.e. making sure the 'ComfyUI/custom_nodes/ComfyUI-TCD' folder exist.\n * You're encouraged to adjust the eta parameter in TCDScheduler to get better results." ]
null
null
What are Diamed Pills? Diamed tablets are a dietary supplement meticulously formulated and crafted to support healthy blood sugar levels. It harnesses the power of natural ingredients, carefully selected for their potential to improve insulin sensitivity and promote glucose metabolism. Designed to complement a balanced diet and lifestyle, Diamed Precio offers a holistic approach to diabetes management without the need for synthetic medication. Official website: <a href="https://www.nutritionsee.com/diaemmexi">www.Diamed.com</a> <p><a href="https://www.nutritionsee.com/diaemmexi"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Diamed-Mexico.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/diaemmexi">Buy now!! Click the link below for more information and get a 50% discount now... Hurry!</a> Official website: <a href="https://www.nutritionsee.com/diaemmexi">www.Diamed.com</a>
{"license": "apache-2.0"}
DiamedMexico/DiamedMexico
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-20T06:35:57+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
What are Diamed Pills? Diamed tablets are a dietary supplement meticulously formulated and crafted to support healthy blood sugar levels. It harnesses the power of natural ingredients, carefully selected for their potential to improve insulin sensitivity and promote glucose metabolism. Designed to complement a balanced diet and lifestyle, Diamed Precio offers a holistic approach to diabetes management without the need for synthetic medication. Official website: <a href="URL <p><a href="URL <img src="URL alt="enter image description here"> </a></p> <a href="URL Buy now!! Click the link below for more information and get a 50% discount now... Hurry!</a> Official website: <a href="URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
image-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
LeoNight/custom-resnet50d
null
[ "transformers", "safetensors", "myResnet", "image-classification", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
null
2024-04-20T06:37:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #myResnet #image-classification #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #myResnet #image-classification #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tom-brady/sn6_219
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:39:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tom-brady/sn6_231
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:39:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mbart-samsum

This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2023
- Bleu: 8.9056
- Gen Len: 21.9792

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log        | 1.0   | 188  | 2.6051          | 5.5269 | 22.8339 |
| No log        | 2.0   | 376  | 2.2023          | 8.9056 | 21.9792 |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
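Since the card leaves usage unspecified, here is a minimal inference sketch, assuming the checkpoint exposes the standard mBART seq2seq interface in `transformers`; the dialogue input and generation settings are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("maviced/mbart-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("maviced/mbart-samsum")

# Illustrative input: a short dialogue to condense
text = "Amanda: I baked cookies. Do you want some? Jerry: Sure! Amanda: I'll bring you some tomorrow."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```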
{"license": "mit", "tags": ["simplification", "generated_from_trainer"], "metrics": ["bleu"], "base_model": "facebook/mbart-large-50", "model-index": [{"name": "mbart-samsum", "results": []}]}
maviced/mbart-samsum
null
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "simplification", "generated_from_trainer", "base_model:facebook/mbart-large-50", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-20T06:40:10+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mbart #text2text-generation #simplification #generated_from_trainer #base_model-facebook/mbart-large-50 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
mbart-samsum ============ This model is a fine-tuned version of facebook/mbart-large-50 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.2023 * Bleu: 8.9056 * Gen Len: 21.9792 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5.6e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mbart #text2text-generation #simplification #generated_from_trainer #base_model-facebook/mbart-large-50 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tom-brady/sn6_232
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:40:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: float16 merge_method: passthrough slices: - sources: - layer_range: [0, 32] model: model: path: meta-llama/Meta-Llama-3-8B-Instruct - sources: - layer_range: [0, 32] model: model: path: meta-llama/Meta-Llama-3-8B-Instruct - sources: - layer_range: [0, 32] model: model: path: meta-llama/Meta-Llama-3-8B-Instruct - sources: - layer_range: [0, 32] model: model: path: meta-llama/Meta-Llama-3-8B-Instruct ```
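The card shows the merge configuration but no loading code, so the following is a minimal, non-authoritative sketch of loading the merged checkpoint with `transformers`. The Hub id `gotchu/llama3-4` comes from this record; `device_map="auto"` assumes `accelerate` is installed, and stacking four 32-layer slices yields a much larger model than the 8B base, so plan memory accordingly.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gotchu/llama3-4"  # Hub id taken from this record
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the merge config's dtype
    device_map="auto",          # requires `accelerate`; shards across available devices
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```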
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["meta-llama/Meta-Llama-3-8B-Instruct"]}
gotchu/llama3-4
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:40:53+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merged This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * meta-llama/Meta-Llama-3-8B-Instruct ### Configuration The following YAML configuration was used to produce this model:
[ "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B-Instruct", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-meta-llama/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merged\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B-Instruct", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/deepseek-llm-67b-chat-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ1_S.gguf) | i1-IQ1_S | 14.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ1_M.gguf) | i1-IQ1_M | 16.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.3 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.3 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ2_S.gguf) | i1-IQ2_S | 21.4 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ2_M.gguf) | i1-IQ2_M | 23.2 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q2_K.gguf) | i1-Q2_K | 25.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.0 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 29.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ3_S.gguf) | i1-IQ3_S | 29.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ3_M.gguf) | i1-IQ3_M | 30.6 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 32.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 35.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.3 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q4_0.gguf) | i1-Q4_0 | 38.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 38.5 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 40.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 46.6 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 47.8 | | | [PART 1](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/deepseek-llm-67b-chat-i1-GGUF/resolve/main/deepseek-llm-67b-chat.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 55.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
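To make the usage note above concrete, here is a minimal sketch with `llama-cpp-python`, one common way to run GGUF files; the card itself does not prescribe a runtime. The filename follows the quant table above, and multi-part files such as the Q6_K split must first be concatenated into a single `.gguf`.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes the i1-Q4_K_S quant from the table has been downloaded locally.
llm = Llama(model_path="deepseek-llm-67b-chat.i1-Q4_K_S.gguf", n_ctx=4096)

result = llm("User: Hello, who are you?\n\nAssistant:", max_tokens=64, stop=["User:"])
print(result["choices"][0]["text"])
```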
{"language": ["en"], "license": "other", "library_name": "transformers", "base_model": "deepseek-ai/deepseek-llm-67b-chat", "license_link": "LICENSE", "license_name": "deepseek", "quantized_by": "mradermacher"}
mradermacher/deepseek-llm-67b-chat-i1-GGUF
null
[ "transformers", "gguf", "en", "base_model:deepseek-ai/deepseek-llm-67b-chat", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:42:46+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-deepseek-ai/deepseek-llm-67b-chat #license-other #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-deepseek-ai/deepseek-llm-67b-chat #license-other #endpoints_compatible #region-us \n" ]
text-generation
transformers
# solar_merge_test_1 ## 🧩 Configuration ```yaml base_model: beomi/OPEN-SOLAR-KO-10.7B dtype: float16 experts: - source_model: beomi/OPEN-SOLAR-KO-10.7B positive_prompts: ["당신은 친절한 보편적인 어시스턴트이다."] - source_model: hyeogi/SOLAR-10.7B-dpo-v1 positive_prompts: ["당신은 친절한 어시스턴트이다."] gate_mode: cheap_embed tokenizer_source: base ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "jieunhan/solar_merge_test_1" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "beomi/OPEN-SOLAR-KO-10.7B", "hyeogi/SOLAR-10.7B-dpo-v1"], "base_model": ["beomi/OPEN-SOLAR-KO-10.7B", "hyeogi/SOLAR-10.7B-dpo-v1"]}
jieunhan/solar_merge_test_1
null
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "beomi/OPEN-SOLAR-KO-10.7B", "hyeogi/SOLAR-10.7B-dpo-v1", "base_model:beomi/OPEN-SOLAR-KO-10.7B", "base_model:hyeogi/SOLAR-10.7B-dpo-v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:46:13+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #beomi/OPEN-SOLAR-KO-10.7B #hyeogi/SOLAR-10.7B-dpo-v1 #base_model-beomi/OPEN-SOLAR-KO-10.7B #base_model-hyeogi/SOLAR-10.7B-dpo-v1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# solar_merge_test_1 ## Configuration ## Usage
[ "# solar_merge_test_1", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #beomi/OPEN-SOLAR-KO-10.7B #hyeogi/SOLAR-10.7B-dpo-v1 #base_model-beomi/OPEN-SOLAR-KO-10.7B #base_model-hyeogi/SOLAR-10.7B-dpo-v1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# solar_merge_test_1", "## Configuration", "## Usage" ]
text-generation
transformers
<img src=https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png> MoE'd up: - [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) - [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)_ Which were the two most interesting llama3 finetunes as of yet. Resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw4
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-20T06:47:54+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
<img src=URL MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct_ Which were the two most interesting llama3 finetunes as of yet. Resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
sebajoe/batchPrompting_13b_25
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:48:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
sebajoe/batchPrompting_7b_25
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:49:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
mlx
# lucataco/Meta-Llama-3-70B-4bit

This model was converted to MLX format from [`meta-llama/Meta-Llama-3-70B`](https://huggingface.co/meta-llama/Meta-Llama-3-70B) using mlx-lm version **0.10.0**.

Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-70B) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("lucataco/Meta-Llama-3-70B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"language": ["en"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "mlx"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "extra_gated_prompt": "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity\u2019s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta\u2019s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display \u201cBuilt with Meta Llama 3\u201d on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include \u201cLlama 3\u201d at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \u201cNotice\u201d text file distributed as a part of such copies: \u201cMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright \u00a9 Meta Platforms, Inc. All Rights Reserved.\u201d\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee\u2019s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201cAS IS\u201d BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use \u201cLlama 3\u201d (the \u201cMark\u201d) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. 
If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. 
Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
lucataco/Meta-Llama-3-70B-4bit
null
[ "mlx", "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-20T06:51:09+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #en #license-other #region-us
# lucataco/Meta-Llama-3-70B-4bit

This model was converted to MLX format from [`meta-llama/Meta-Llama-3-70B`](https://huggingface.co/meta-llama/Meta-Llama-3-70B) using mlx-lm version 0.10.0.
Refer to the original model card for more details on the model.

## Use with mlx
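The section above announces MLX usage without an example. A minimal sketch of the standard mlx-lm loading pattern follows (the `load`/`generate` helpers are the documented mlx-lm API; the prompt string is illustrative):

```python
# pip install mlx-lm
from mlx_lm import load, generate

# Pull the 4-bit MLX weights from the Hub and build the matching tokenizer.
model, tokenizer = load("lucataco/Meta-Llama-3-70B-4bit")

# Generate a completion; verbose=True streams tokens as they are decoded.
response = generate(
    model,
    tokenizer,
    prompt="Summarize what 4-bit quantization trades away.",
    verbose=True,
)
print(response)
```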
[ "# lucataco/Meta-Llama-3-70B-4bit\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-70B']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #facebook #meta #pytorch #llama-3 #text-generation #en #license-other #region-us \n", "# lucataco/Meta-Llama-3-70B-4bit\nThis model was converted to MLX format from ['meta-llama/Meta-Llama-3-70B']() using mlx-lm version 0.10.0.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-to-image
diffusers
# API Inference

![generated from modelslab.com](https://cdn2.stablediffusionapi.com/generations/bf190b5a-fe19-437c-ba05-82f29cb1f7ad-0.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.

Replace the key in the code below and change **model_id** to "realcartoonpixarv9".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)

Try the model for free: [Generate Images](https://modelslab.com/models/realcartoonpixarv9)

Model link: [View model](https://modelslab.com/models/realcartoonpixarv9)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "realcartoonpixarv9",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
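`response.text` above prints the raw JSON body. If you want the generated image URLs directly, a minimal sketch of the follow-up step is below; the `status` and `output` field names are assumptions based on the documented ModelsLab response shape, so verify them against the docs linked above:

```python
# Continues from the `response` object in the snippet above.
result = response.json()

if result.get("status") == "success":
    # `output` is assumed to hold the list of generated image URLs.
    for image_url in result.get("output", []):
        print(image_url)
else:
    # Queued jobs may report "processing" instead; see the docs for polling.
    print(result)
```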
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
stablediffusionapi/realcartoonpixarv9
null
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-20T06:52:52+00:00
[]
[]
TAGS #diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# API Inference !generated from URL ## Get API Key Get API key from ModelsLab API, No Payment needed. Replace Key in below code, change model_id to "realcartoonpixarv9" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs Try model for free: Generate Images Model link: View model View all models: View Models import requests import json url = "URL payload = URL({ "key": "your_api_key", "model_id": "realcartoonpixarv9", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(URL) > Use this coupon code to get 25% off DMGG0RBN
[ "# API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"realcartoonpixarv9\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"realcartoonpixarv9\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
[ "TAGS\n#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"realcartoonpixarv9\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"realcartoonpixarv9\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-dpo-full-accumulation4

This model is a fine-tuned version of [data/zephyr-7b-sft-full-accumulation2](https://huggingface.co/data/zephyr-7b-sft-full-accumulation2) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5032
- Rewards/chosen: -0.9893
- Rewards/rejected: -2.0234
- Rewards/accuracies: 0.7812
- Rewards/margins: 1.0341
- Logps/rejected: -462.7061
- Logps/chosen: -358.6745
- Logits/rejected: 3.3182
- Logits/chosen: 2.7991

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.59          | 0.2093 | 100  | 0.5946          | -0.2826        | -0.6651          | 0.7266             | 0.3825          | -326.8777      | -288.0025    | -2.2764         | -2.3187       |
| 0.5622        | 0.4186 | 200  | 0.5490          | -0.5914        | -1.2367          | 0.7578             | 0.6452          | -384.0357      | -318.8896    | -1.6885         | -1.7635       |
| 0.5069        | 0.6279 | 300  | 0.5186          | -0.9189        | -1.8568          | 0.7773             | 0.9379          | -446.0468      | -351.6352    | 3.7286          | 3.1924        |
| 0.5183        | 0.8373 | 400  | 0.5042          | -1.0384        | -2.0520          | 0.7773             | 1.0136          | -465.5701      | -363.5876    | 3.4727          | 2.9519        |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
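The card does not include a usage snippet. As a minimal sketch, the checkpoint should load through the ordinary transformers text-generation pipeline (the repo id is taken from this card, and chat-template support is inferred from the `conversational` tag, so treat both as assumptions):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="just1nseo/zephyr-7b-dpo-full-accumulation4",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr-style models are chat-tuned, so format the input with the chat template.
messages = [{"role": "user", "content": "Briefly explain direct preference optimization."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

print(pipe(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"])
```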
{"tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "data/zephyr-7b-sft-full-accumulation2", "model-index": [{"name": "zephyr-7b-dpo-full-accumulation4", "results": []}]}
just1nseo/zephyr-7b-dpo-full-accumulation4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:data/zephyr-7b-sft-full-accumulation2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:54:42+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-data/zephyr-7b-sft-full-accumulation2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
zephyr-7b-dpo-full-accumulation4 ================================ This model is a fine-tuned version of data/zephyr-7b-sft-full-accumulation2 on the HuggingFaceH4/ultrafeedback\_binarized dataset. It achieves the following results on the evaluation set: * Loss: 0.5032 * Rewards/chosen: -0.9893 * Rewards/rejected: -2.0234 * Rewards/accuracies: 0.7812 * Rewards/margins: 1.0341 * Logps/rejected: -462.7061 * Logps/chosen: -358.6745 * Logits/rejected: 3.3182 * Logits/chosen: 2.7991 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.1.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-data/zephyr-7b-sft-full-accumulation2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
image-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
LeoNight/custom-resnet50d-v2
null
[ "transformers", "safetensors", "resnet-t", "image-classification", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
null
2024-04-20T06:55:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #resnet-t #image-classification #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #resnet-t #image-classification #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# solar_merge_test_2

## 🧩 Configuration

```yaml
base_model: beomi/OPEN-SOLAR-KO-10.7B
dtype: float16
experts:
  - source_model: beomi/OPEN-SOLAR-KO-10.7B
    positive_prompts: ["당신은 친절한 보편적인 어시스턴트이다."]  # "You are a kind, general-purpose assistant."
  - source_model: hyeogi/SOLAR-10.7B-dpo-v1
    positive_prompts: ["당신은 친절한 어시스턴트이다."]  # "You are a kind assistant."
  - source_model: GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15
    positive_prompts: ["당신은 친절한 어시스턴트이다."]  # "You are a kind assistant."
  - source_model: megastudyedu/M-SOLAR-10.7B-v1.1-beta
    positive_prompts: ["당신은 친절한 어시스턴트이다."]  # "You are a kind assistant."
gate_mode: cheap_embed
tokenizer_source: base
```

## 💻 Usage

```python
# Notebook-style install of the runtime dependencies.
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jieunhan/solar_merge_test_2"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "beomi/OPEN-SOLAR-KO-10.7B", "hyeogi/SOLAR-10.7B-dpo-v1", "GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15", "megastudyedu/M-SOLAR-10.7B-v1.1-beta"], "base_model": ["beomi/OPEN-SOLAR-KO-10.7B", "hyeogi/SOLAR-10.7B-dpo", "GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15", "megastudyedu/M-SOLAR-10.7B-v1.1-beta"]}
jieunhan/solar_merge_test_2
null
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "beomi/OPEN-SOLAR-KO-10.7B", "hyeogi/SOLAR-10.7B-dpo-v1", "GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15", "megastudyedu/M-SOLAR-10.7B-v1.1-beta", "base_model:beomi/OPEN-SOLAR-KO-10.7B", "base_model:hyeogi/SOLAR-10.7B-dpo", "base_model:GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15", "base_model:megastudyedu/M-SOLAR-10.7B-v1.1-beta", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T06:55:13+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #beomi/OPEN-SOLAR-KO-10.7B #hyeogi/SOLAR-10.7B-dpo-v1 #GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15 #megastudyedu/M-SOLAR-10.7B-v1.1-beta #base_model-beomi/OPEN-SOLAR-KO-10.7B #base_model-hyeogi/SOLAR-10.7B-dpo #base_model-GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15 #base_model-megastudyedu/M-SOLAR-10.7B-v1.1-beta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# solar_merge_test_2 ## Configuration ## Usage
[ "# solar_merge_test_2", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #beomi/OPEN-SOLAR-KO-10.7B #hyeogi/SOLAR-10.7B-dpo-v1 #GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15 #megastudyedu/M-SOLAR-10.7B-v1.1-beta #base_model-beomi/OPEN-SOLAR-KO-10.7B #base_model-hyeogi/SOLAR-10.7B-dpo #base_model-GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15 #base_model-megastudyedu/M-SOLAR-10.7B-v1.1-beta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# solar_merge_test_2", "## Configuration", "## Usage" ]
text-to-image
diffusers
# API Inference

![generated from modelslab.com](https://cdn2.stablediffusionapi.com/generations/bf190b5a-fe19-437c-ba05-82f29cb1f7ad-0.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.

Replace the key in the code below and change **model_id** to "azovyarpg4".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)

Try the model for free: [Generate Images](https://modelslab.com/models/azovyarpg4)

Model link: [View model](https://modelslab.com/models/azovyarpg4)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "azovyarpg4",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
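Generation requests like the one above can also come back asynchronously. A hedged sketch of polling a queued job follows; the `status`, `fetch_result`, and `output` field names are assumptions based on the documented ModelsLab response shape, so verify them against the docs:

```python
import time

# Continues from the `response`, `headers`, `json`, and `requests` objects
# in the snippet above.
result = response.json()

while result.get("status") == "processing":
    time.sleep(5)  # give the queued job time to finish
    fetch = requests.post(
        result["fetch_result"],  # assumed polling URL returned by the API
        headers=headers,
        data=json.dumps({"key": "your_api_key"}),
    )
    result = fetch.json()

print(result.get("output"))  # assumed list of generated image URLs
```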
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
stablediffusionapi/azovyarpg4
null
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-20T06:56:21+00:00
[]
[]
TAGS #diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# API Inference !generated from URL ## Get API Key Get API key from ModelsLab API, No Payment needed. Replace Key in below code, change model_id to "azovyarpg4" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs Try model for free: Generate Images Model link: View model View all models: View Models import requests import json url = "URL payload = URL({ "key": "your_api_key", "model_id": "azovyarpg4", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(URL) > Use this coupon code to get 25% off DMGG0RBN
[ "# API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"azovyarpg4\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"azovyarpg4\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
[ "TAGS\n#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"azovyarpg4\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"azovyarpg4\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
image-segmentation
transformers
[github](https://github.com/yuyijiong/remote_sense_image_quality_inspection)

Quality inspection of remote-sensing imagery with a SegFormer model. Semantic segmentation labels the region types below; "background" means the image has no quality problem, and each of the remaining classes marks one specific quality defect:

- "0": background
- "1": cloud
- "2": shadow
- "3": stretching artifacts (拉花)
- "4": blur
- "5": spectral overflow
- "6": distortion
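A minimal inference sketch with the transformers SegFormer classes is below; the repo id comes from this card, the input file name is hypothetical, and the presence of a stored image-processor config in the repo is assumed:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "yuyijiong/segformer-b5-remote-sensing-quality"
processor = AutoImageProcessor.from_pretrained(repo)  # assumes a processor config in the repo
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("remote_sensing_tile.png").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # pixel values are the class ids listed above
print(pred.shape, pred.unique())
```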
{"language": ["zh"], "license": "cc-by-nc-4.0", "pipeline_tag": "image-segmentation"}
yuyijiong/segformer-b5-remote-sensing-quality
null
[ "transformers", "pytorch", "segformer", "image-segmentation", "zh", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T06:59:50+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #segformer #image-segmentation #zh #license-cc-by-nc-4.0 #endpoints_compatible #region-us
github Quality inspection of remote-sensing imagery with a SegFormer model. Semantic segmentation labels the region types below; "background" means the image has no quality problem, and each of the remaining classes marks one specific quality defect: "0": background, "1": cloud, "2": shadow, "3": stretching artifacts (拉花), "4": blur, "5": spectral overflow, "6": distortion
[]
[ "TAGS\n#transformers #pytorch #segformer #image-segmentation #zh #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
wendy41/llama-3-user111-200
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:00:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
heyllm234/sc46
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:01:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
[github](https://github.com/yuyijiong/remote_sense_image_quality_inspection) This model uses Swin V2 to detect whether a remote-sensing image contains any of the following 9 types of quality problem: "0": "cloud", "1": "shadow", "2": "smearing", "3": "blur", "4": "spectral overflow", "5": "distortion", "6": "stitching traces", "7": "stitching errors", "8": "stripe noise". The model outputs a 9-dimensional vector; each dimension is the probability that the image exhibits that quality problem, and a probability above 50% is treated as the problem being present.
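For reference, a minimal inference sketch. It assumes the checkpoint loads through the standard transformers image-classification classes; the multi-label reading with a 0.5 sigmoid threshold follows the card's own description, and the input file name is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "yuyijiong/swin-v2-base-remote-sensing-quality"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("tile.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits.squeeze(0)  # 9 independent defect scores

# Per the card: sigmoid probability > 0.5 means the defect is present.
probs = torch.sigmoid(logits)
for idx, p in enumerate(probs.tolist()):
    if p > 0.5:
        print(model.config.id2label.get(idx, str(idx)), round(p, 3))
```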
{"language": ["zh"], "license": "cc-by-nc-4.0"}
yuyijiong/swin-v2-base-remote-sensing-quality
null
[ "transformers", "pytorch", "swinv2", "image-classification", "zh", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:01:16+00:00
[]
[ "zh" ]
TAGS #transformers #pytorch #swinv2 #image-classification #zh #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
github This model uses Swin V2 to detect whether a remote-sensing image contains any of the following 9 types of quality problem: "0": "cloud", "1": "shadow", "2": "smearing", "3": "blur", "4": "spectral overflow", "5": "distortion", "6": "stitching traces", "7": "stitching errors", "8": "stripe noise". The model outputs a 9-dimensional vector; each dimension is the probability that the image exhibits that quality problem, and a probability above 50% is treated as the problem being present.
[]
[ "TAGS\n#transformers #pytorch #swinv2 #image-classification #zh #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
TinyPixel/llama-3-adapter
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:04:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Llama-3-Smaug-8B ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/OrcJyTaUtD2HxJOPPwNva.png) This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ## Evaluation ``` ########## First turn ########## score model turn llama3-8b-smaug-2-merged-600 1 8.79375 llama3-8b-smaug-2-merged-150 1 8.71250 llama3-8b-smaug-2-merged-300 1 8.66250 base_Meta-Llama-3-8B-Instruct 1 8.53125 llama3-8b-smaug-2-merged-450 1 8.42500 ########## Second turn ########## score model turn llama3-8b-smaug-2-merged-450 2 7.8125 llama3-8b-smaug-2-merged-300 2 7.7375 llama3-8b-smaug-2-merged-600 2 7.7250 llama3-8b-smaug-2-merged-150 2 7.7125 base_Meta-Llama-3-8B-Instruct 2 7.5500 ########## Average ########## score model llama3-8b-smaug-2-merged-600 8.259375 llama3-8b-smaug-2-merged-150 8.212500 llama3-8b-smaug-2-merged-300 8.200000 llama3-8b-smaug-2-merged-450 8.118750 base_Meta-Llama-3-8B-Instruct 8.040625 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | llama3-8b-smaug-2-merged-600 | **8.79** | 7.73 | **8.26** | | llama3-8b-smaug-2-merged-450 | 8.43 | **7.81** | 8.12 | | llama3-8b-smaug-2-merged-300 | 8.66 | 7.74 | 8.20 | | llama3-8b-smaug-2-merged-150 | 8.71 | 7.71 | 8.21 | | Meta-Llama-3-8B-Instruct | 8.53 | 7.55 | 8.04 |
{"license": "llama2", "library_name": "transformers"}
LoneStriker/Llama-3-Smaug-8B-GGUF
null
[ "transformers", "gguf", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:09:10+00:00
[]
[]
TAGS #transformers #gguf #license-llama2 #endpoints_compatible #region-us
Llama-3-Smaug-8B ================ ### Built with Meta Llama 3 !image/png This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to meta-llama/Meta-Llama-3-8B. ### Model Description * Developed by: Abacus.AI * License: URL * Finetuned from model: meta-llama/Meta-Llama-3-8B. Evaluation ----------
[ "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
[ "TAGS\n#transformers #gguf #license-llama2 #endpoints_compatible #region-us \n", "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
null
transformers
# Model Card This model is pretrained as a reference baseline to the Based model provided here: https://huggingface.co/hazyresearch/based-1b-50b. Both checkpoints are pretrained on **50Bn tokens** of the Pile in the exact same data order using next token prediction. A WandB report for training is here: https://api.wandb.ai/links/hazy-research/ggo9rst2 ### Model Sources The model is a standard Mamba model using the model code provided here: https://github.com/state-spaces/mamba/tree/main/mamba_ssm The training code is provided here and can be used to reproduce training: https://github.com/HazyResearch/based The paper for the work is here, and the appendix includes additional experimental details/hyperparameters: https://arxiv.org/abs/2402.18668 ### Uses The purpose of this work is to evaluate the language modeling quality of a new efficient architecture, Based. We include a series of benchmarks that you can use to evaluate quality: - FDA: https://huggingface.co/datasets/hazyresearch/based-fda - SWDE: https://huggingface.co/datasets/hazyresearch/based-swde - SQUAD: https://huggingface.co/datasets/hazyresearch/based-squad ## Citation Please consider citing this paper if you use our work: ``` @article{arora2024simple, title={Simple linear attention language models balance the recall-throughput tradeoff}, author={Arora, Simran and Eyuboglu, Sabri and Zhang, Michael and Timalsina, Aman and Alberti, Silas and Zinsley, Dylan and Zou, James and Rudra, Atri and Ré, Christopher}, journal={arXiv:2402.18668}, year={2024} } ``` Please reach out to [email protected], [email protected], and [email protected] with questions.
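A loading sketch, under the assumption that the checkpoint is directly consumable by mamba_ssm's `MambaLMHeadModel.from_pretrained` and that a GPT-NeoX (Pile-style) tokenizer applies; check the linked training repo for the authoritative loading path:

```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

# The Pile-style GPT-NeoX tokenizer is an assumption about this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = MambaLMHeadModel.from_pretrained("hazyresearch/mamba-1b-50b",
                                         device="cuda", dtype=torch.float16)

input_ids = tokenizer("The Pile is", return_tensors="pt").input_ids.to("cuda")
out = model.generate(input_ids=input_ids, max_length=64)  # returns token ids
print(tokenizer.decode(out[0], skip_special_tokens=True))
```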
{"language": ["en"], "datasets": ["EleutherAI/pile"]}
hazyresearch/mamba-1b-50b
null
[ "transformers", "pytorch", "en", "dataset:EleutherAI/pile", "arxiv:2402.18668", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:09:14+00:00
[ "2402.18668" ]
[ "en" ]
TAGS #transformers #pytorch #en #dataset-EleutherAI/pile #arxiv-2402.18668 #endpoints_compatible #region-us
# Model Card This model is pretrained as a reference baseline to the Based model provided here: URL Both checkpoints are pretrained on 50Bn tokens of the Pile in the exact same data order using next token prediction. A WandB report for training is here: URL ### Model Sources The model is a standard Mamba model using the model code provided here: URL The training code is provided here and can be used to reproduce training: URL The paper for the work is here, and the appendix includes additional experimental details/hyperparameters: URL ### Uses The purpose of this work is to evaluate the language modeling quality of a new efficient architecture, Based. We include a series of benchmarks that you can use to evaluate quality: - FDA: URL - SWDE: URL - SQUAD: URL Please consider citing this paper if you use our work: Please reach out to simarora@URL, eyuboglu@URL, and mzhang20@URL with questions.
[ "# Model Card\n\nThis model is pretrained as a reference baseline to the Based model provided here: URL \n\nBoth checkpoints are pretrained on 50Bn tokens of the Pile in the exact same data order using next token prediction. \n\nA WandB report for training is here: URL", "### Model Sources\n\nThe model is a standard Mamba model using the model code provided here: URL\n\nThe training code is provided here and can be used to reproduce training: URL\n\nThe paper for the work is here, and the appendix includes additional experimental details/hyperparameters: URL", "### Uses\n\nThe purpose of this work is to evaluate the language modeling quality of a new efficient architecture, Based. \n\nWe include a series of benchmarks that you can use to evaluate quality: \n- FDA: URL\n- SWDE: URL\n- SQUAD: URL\n\n\n\n\nPlease consider citing this paper if you use our work: \n\n\n\nPlease reach out to simarora@URL, eyuboglu@URL, and mzhang20@URL with questions." ]
[ "TAGS\n#transformers #pytorch #en #dataset-EleutherAI/pile #arxiv-2402.18668 #endpoints_compatible #region-us \n", "# Model Card\n\nThis model is pretrained as a reference baseline to the Based model provided here: URL \n\nBoth checkpoints are pretrained on 50Bn tokens of the Pile in the exact same data order using next token prediction. \n\nA WandB report for training is here: URL", "### Model Sources\n\nThe model is a standard Mamba model using the model code provided here: URL\n\nThe training code is provided here and can be used to reproduce training: URL\n\nThe paper for the work is here, and the appendix includes additional experimental details/hyperparameters: URL", "### Uses\n\nThe purpose of this work is to evaluate the language modeling quality of a new efficient architecture, Based. \n\nWe include a series of benchmarks that you can use to evaluate quality: \n- FDA: URL\n- SWDE: URL\n- SQUAD: URL\n\n\n\n\nPlease consider citing this paper if you use our work: \n\n\n\nPlease reach out to simarora@URL, eyuboglu@URL, and mzhang20@URL with questions." ]
null
transformers
# Uploaded model - **Developed by:** raviguntakala - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
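A minimal usage sketch via Unsloth's `FastLanguageModel`; the sequence length and 4-bit flag here are illustrative assumptions, not values stated by the card:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="raviguntakala/llama-3-8b-4bit",
    max_seq_length=2048,   # illustrative assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's inference path

inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```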
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
raviguntakala/llama-3-8b-4bit
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:09:18+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: raviguntakala - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: raviguntakala\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: raviguntakala\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Uploaded model - **Developed by:** thisurawz1 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
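A sketch of the kind of Unsloth + TRL SFT loop the card alludes to. The dataset file, LoRA settings, hyperparameters, and the TRL-0.8-era `SFTTrainer` arguments (`dataset_text_field`, `max_seq_length`) are all assumptions, and the 10-epoch setting is only inferred from the repo name:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit", max_seq_length=2048, load_in_4bit=True
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16, lora_dropout=0, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical JSONL file with a "text" column of formatted chat samples.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=10,  # inferred from the "10ep" repo name, unconfirmed
        fp16=True,
        logging_steps=10,
    ),
)
trainer.train()
```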
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
thisurawz1/llama3_unsloth_10ep
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:11:54+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: thisurawz1 - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: thisurawz1\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: thisurawz1\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
<img src="https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png"> MoE'd up: - [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) - [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) These were the two most interesting Llama 3 finetunes so far. The resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, Llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw4.2
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T07:12:17+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src=URL MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct These were the two most interesting Llama 3 finetunes so far. The resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, Llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-firdous-malay-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_13_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
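The listed hyperparameters map directly onto transformers `TrainingArguments`; a sketch (the output_dir and surrounding Trainer wiring are assumed):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-firdous-malay-colab",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 8 x 8 = effective train batch size 64
    warmup_steps=500,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
)
```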
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_13_0"], "base_model": "facebook/wav2vec2-large-xlsr-53", "model-index": [{"name": "wav2vec2-large-xls-r-300m-firdous-malay-colab", "results": []}]}
f77777/wav2vec2-large-xls-r-300m-firdous-malay-colab
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_13_0", "base_model:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:14:13+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_13_0 #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #endpoints_compatible #region-us
# wav2vec2-large-xls-r-300m-firdous-malay-colab This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice_13_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
[ "# wav2vec2-large-xls-r-300m-firdous-malay-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice_13_0 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_13_0 #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #endpoints_compatible #region-us \n", "# wav2vec2-large-xls-r-300m-firdous-malay-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice_13_0 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
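A sketch mapping the listed values onto transformers `TrainingArguments`; `max_steps` mirrors "training_steps: 500" and `fp16` mirrors native AMP mixed precision (the output_dir is assumed from the model name):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",            # assumed from the model name above
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 1 x 4 = effective train batch size 4
    warmup_steps=2,
    max_steps=500,                   # "training_steps: 500"
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "mixed_precision_training: Native AMP"
)
```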
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b-it", "model-index": [{"name": "outputs", "results": []}]}
AJosh/outputs
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2b-it", "license:gemma", "region:us" ]
null
2024-04-20T07:14:59+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b-it #license-gemma #region-us
# outputs This model is a fine-tuned version of google/gemma-2b-it on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0 - Pytorch 2.2.1+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# outputs\n\nThis model is a fine-tuned version of google/gemma-2b-it on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- training_steps: 500\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b-it #license-gemma #region-us \n", "# outputs\n\nThis model is a fine-tuned version of google/gemma-2b-it on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- training_steps: 500\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
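The quantization section above corresponds one-to-one to a transformers `BitsAndBytesConfig`; a sketch reconstructing it (wiring it into a base-model load is left as an assumption):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```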
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
UnderstandLing/Llama-3-8B-Instruct-pt
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2024-04-20T07:16:24+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ## Training procedure The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.2" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
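A minimal sketch for loading this adapter onto its declared base model (meta-llama/Meta-Llama-3-8B-Instruct, per the card metadata), reusing the 4-bit settings listed above; the device placement is an assumption:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",  # assumption
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = PeftModel.from_pretrained(base, "UnderstandLing/Llama-3-8B-Instruct-it")
```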
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
UnderstandLing/Llama-3-8B-Instruct-it
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2024-04-20T07:16:37+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ## Training procedure The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.2" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.2" ]
text-generation
transformers
This model is the RLHF version of `HuggingFaceH4/mistral-7b-sft-beta`, trained without any external responses. We apply the GSHF algorithm to the SFT baseline. The external signals include (1) a reward model and (2) AI-generated prompts. **We obtain a 35.95% win-rate (34.79% LC win-rate) on Alpaca Eval v2.** The win-rate of the base model is only 4.63%. On MT-Bench, the model scores about 7.5, while the base model scores only 5.3. These results demonstrate the significant potential of iterative RLHF to make LLMs deliver appropriate, well-structured responses, even without any external responses.

## Model Details

We perform three iterations of the GSHF algorithm on `HuggingFaceH4/mistral-7b-sft-beta`, with responses labeled by a reward model and prompts generated by ChatGPT using self-instruct-style prompt augmentation. We use 60K AI-generated prompts in the training process. Examples are shown below:

```json
{"prompt": "Why is gold considered a good reserve asset for central banks?"}
{"prompt": "What are the top 5 yoga poses for stress relief?"}
{"prompt": "Craft a blog title about the health implications of eating avocados daily based on their caloric value."}
{"prompt": "Design a simple HTML chat interface that simulates a conversation between a user and a bot, displaying two messages from each."}
{"prompt": "List 10 names from different cultures that embody the meanings of peace, harmony, or compassion."}
```

## Uses

The usage and chat template format follow the SFT model `HuggingFaceH4/mistral-7b-sft-beta`.

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="sfairXC/FsfairX-Zephyr-Chat-v0.1", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
```

## Evaluation

The evaluation results on Alpaca Eval v2 are provided below:

| Model       | Win Rate | LC Win Rate | Avg Length |
|-------------|----------|-------------|------------|
| Base        | 4.63     | 8.01        | 916        |
| Iteration 1 | 13.26    | 20.81       | 1205       |
| Iteration 2 | 23.57    | 27.63       | 1623       |
| Iteration 3 | 35.95    | 34.79       | 2275       |

## Citation

If you found this helpful, please cite the following papers.
```bibtex @article{dong2023raft, title={Raft: Reward ranked finetuning for generative foundation model alignment}, author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong}, journal={arXiv preprint arXiv:2304.06767}, year={2023} } @misc{xiong2024iterative, title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint}, author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang}, year={2024}, eprint={2312.11456}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
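To make the iterative recipe above concrete, here is a schematic sketch of one step, not the authors' exact GSHF pipeline: sample several responses per prompt from the current policy, score them with a reward model, and keep the best/worst pair as preference data for the next preference-optimization round. The reward-model repository name and its scoring interface are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer

policy_id = "HuggingFaceH4/mistral-7b-sft-beta"   # SFT starting point, as in the card
reward_id = "sfairXC/FsfairX-LLaMA3-RM-v0.1"      # assumed reward-model repo, for illustration

tok = AutoTokenizer.from_pretrained(policy_id)
policy = AutoModelForCausalLM.from_pretrained(policy_id, torch_dtype=torch.bfloat16, device_map="auto")
rm_tok = AutoTokenizer.from_pretrained(reward_id)
rm = AutoModelForSequenceClassification.from_pretrained(reward_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Why is gold considered a good reserve asset for central banks?"
chat = tok.apply_chat_template([{"role": "user", "content": prompt}], tokenize=False, add_generation_prompt=True)
inputs = tok(chat, return_tensors="pt").to(policy.device)

# Best-of-N sampling: draw several candidate responses from the current policy.
outs = policy.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, num_return_sequences=4)
responses = [tok.decode(o[inputs["input_ids"].shape[1]:], skip_special_tokens=True) for o in outs]

def score(response):
    # Score the full (prompt, response) conversation with the reward model;
    # a single scalar reward-head logit is assumed as the output.
    conv = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response}]
    enc = rm_tok.apply_chat_template(conv, tokenize=True, return_tensors="pt").to(rm.device)
    with torch.no_grad():
        return rm(enc).logits[0, 0].item()

scores = [score(r) for r in responses]
# The highest/lowest-scoring responses form a chosen/rejected pair that would
# feed the next preference-tuning update (e.g., DPO) in the iterative loop.
chosen = responses[max(range(len(scores)), key=scores.__getitem__)]
rejected = responses[min(range(len(scores)), key=scores.__getitem__)]
```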
{"license": "cc-by-sa-4.0"}
sfairXC/FsfairX-Zephyr-Chat-v0.1
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2312.11456", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T07:21:04+00:00
[ "2312.11456" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-2312.11456 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This model is the RLHF version of 'HuggingFaceH4/mistral-7b-sft-beta', trained without any external responses. We apply the GSHF algorithm to the SFT baseline. The external signals include (1) a reward model and (2) AI-generated prompts. We obtain a 35.95% win-rate (34.79% LC win-rate) on Alpaca Eval v2. The win-rate of the base model is only 4.63%. On MT-Bench, the model scores about 7.5, while the base model scores only 5.3. These results demonstrate the significant potential of iterative RLHF to make LLMs deliver appropriate, well-structured responses, even without any external responses.


Model Details
-------------


We perform three iterations of the GSHF algorithm on 'HuggingFaceH4/mistral-7b-sft-beta', with responses labeled by a reward model and prompts generated by ChatGPT using self-instruct-style prompt augmentation. We use 60K AI-generated prompts in the training process. Examples are shown below.


Uses
----


The usage and chat template format follow the SFT model 'HuggingFaceH4/mistral-7b-sft-beta'.


Evaluation
----------


The evaluation results on Alpaca Eval v2 are provided below.


If you found this helpful, please cite the following papers.
[]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-2312.11456 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
UnderstandLing/Llama-3-8B-Instruct-ru
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2024-04-20T07:22:07+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ## Training procedure The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.2" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
UnderstandLing/Llama-3-8B-Instruct-hi
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2024-04-20T07:22:24+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ## Training procedure The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.2" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B-Instruct #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.6.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper ORF Bundeslaender This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the ZIB2 Common Voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3878 - Wer: 17.2956 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.3943 | 1.7153 | 1000 | 0.4072 | 17.5540 | | 0.3431 | 3.4305 | 2000 | 0.3922 | 17.3458 | | 0.3961 | 5.1458 | 3000 | 0.3885 | 17.3506 | | 0.3548 | 6.8611 | 4000 | 0.3878 | 17.2956 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
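Since this repository ships a PEFT adapter rather than full weights, inference requires attaching the adapter to the base Whisper checkpoint. A minimal sketch (the audio file path is a placeholder; merging via `merge_and_unload()` is a safe alternative if the pipeline does not accept the PEFT-wrapped model directly):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

base_id = "openai/whisper-large-v3"
base = AutoModelForSpeechSeq2Seq.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "rmacek/ORF-large-v3-de")  # attach the LoRA adapter
# model = model.merge_and_unload()  # optionally fold the adapter into the base weights

processor = AutoProcessor.from_pretrained(base_id)
asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    torch_dtype=torch.float16,
)
print(asr("sample_german_audio.wav", generate_kwargs={"language": "german"})["text"])
```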
{"language": ["de"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["rmacek/ORF-whisper-large-v3"], "metrics": ["wer"], "base_model": "openai/whisper-large-v3", "model-index": [{"name": "Whisper ORF Bundeslaender", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "ZIB2 Common Voice", "type": "rmacek/ORF-whisper-large-v3", "args": "config: de, split: test"}, "metrics": [{"type": "wer", "value": 17.29558995956067, "name": "Wer"}]}]}]}
rmacek/ORF-large-v3-de
null
[ "peft", "tensorboard", "safetensors", "whisper", "generated_from_trainer", "de", "dataset:rmacek/ORF-whisper-large-v3", "base_model:openai/whisper-large-v3", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-20T07:23:20+00:00
[]
[ "de" ]
TAGS #peft #tensorboard #safetensors #whisper #generated_from_trainer #de #dataset-rmacek/ORF-whisper-large-v3 #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #region-us
Whisper ORF Bundeslaender ========================= This model is a fine-tuned version of openai/whisper-large-v3 on the ZIB2 Common Voice dataset. It achieves the following results on the evaluation set: * Loss: 0.3878 * Wer: 17.2956 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.1.dev0 * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #whisper #generated_from_trainer #de #dataset-rmacek/ORF-whisper-large-v3 #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mistral-finetuned-samsum

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
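A minimal inference sketch for this adapter, assuming the `auto-gptq`/`optimum` backends are installed for the GPTQ base weights; the dialogue and the `[INST]` prompt wrapper follow the Mistral-Instruct convention and are illustrative only:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # needs auto-gptq/optimum
model = PeftModel.from_pretrained(base, "siddharth-magesh/mistral-finetuned-samsum")

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure! What kind?\nAmanda: Chocolate chip."
prompt = f"[INST] Summarize the following dialogue:\n{dialogue} [/INST]"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```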
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "model-index": [{"name": "mistral-finetuned-samsum", "results": []}]}
siddharth-magesh/mistral-finetuned-samsum
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-04-20T07:25:56+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us
# mistral-finetuned-samsum This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# mistral-finetuned-samsum\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 250\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us \n", "# mistral-finetuned-samsum\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 250\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text-generation
transformers
# Llama-3-Smaug-8B ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/OrcJyTaUtD2HxJOPPwNva.png) This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ## Evaluation ``` ########## First turn ########## score model turn llama3-8b-smaug-2-merged-600 1 8.79375 llama3-8b-smaug-2-merged-150 1 8.71250 llama3-8b-smaug-2-merged-300 1 8.66250 base_Meta-Llama-3-8B-Instruct 1 8.53125 llama3-8b-smaug-2-merged-450 1 8.42500 ########## Second turn ########## score model turn llama3-8b-smaug-2-merged-450 2 7.8125 llama3-8b-smaug-2-merged-300 2 7.7375 llama3-8b-smaug-2-merged-600 2 7.7250 llama3-8b-smaug-2-merged-150 2 7.7125 base_Meta-Llama-3-8B-Instruct 2 7.5500 ########## Average ########## score model llama3-8b-smaug-2-merged-600 8.259375 llama3-8b-smaug-2-merged-150 8.212500 llama3-8b-smaug-2-merged-300 8.200000 llama3-8b-smaug-2-merged-450 8.118750 base_Meta-Llama-3-8B-Instruct 8.040625 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | llama3-8b-smaug-2-merged-600 | **8.79** | 7.73 | **8.26** | | llama3-8b-smaug-2-merged-450 | 8.43 | **7.81** | 8.12 | | llama3-8b-smaug-2-merged-300 | 8.66 | 7.74 | 8.20 | | llama3-8b-smaug-2-merged-150 | 8.71 | 7.71 | 8.21 | | Meta-Llama-3-8B-Instruct | 8.53 | 7.55 | 8.04 |
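Since the recipe targets multi-turn conversation quality, a two-turn chat loop mirrors the first-/second-turn scores above. A minimal sketch against the unquantized upstream weights (the `abacusai/Llama-3-Smaug-8B` repo name is an assumption; this exl2 quant itself is meant for ExLlamaV2-based loaders instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Llama-3-Smaug-8B"  # assumed upstream repo with full-precision weights
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Plan a weekend trip to Vienna."}]
followups = ["Now compress that plan into three bullet points."]
for turn in range(2):  # two turns, matching the first/second-turn scores above
    inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
    out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    reply = tok.decode(out[0][inputs.shape[1]:], skip_special_tokens=True)
    messages.append({"role": "assistant", "content": reply})
    if turn < len(followups):
        messages.append({"role": "user", "content": followups[turn]})
```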
{"license": "llama2", "library_name": "transformers"}
LoneStriker/Llama-3-Smaug-8B-3.0bpw-h6-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "3-bit", "region:us" ]
null
2024-04-20T07:28:27+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
Llama-3-Smaug-8B ================ ### Built with Meta Llama 3 !image/png This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to meta-llama/Meta-Llama-3-8B. ### Model Description * Developed by: Abacus.AI * License: URL * Finetuned from model: meta-llama/Meta-Llama-3-8B. Evaluation ----------
[ "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n", "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
text-generation
transformers
# Llama-3-Smaug-8B ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/OrcJyTaUtD2HxJOPPwNva.png) This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ## Evaluation ``` ########## First turn ########## score model turn llama3-8b-smaug-2-merged-600 1 8.79375 llama3-8b-smaug-2-merged-150 1 8.71250 llama3-8b-smaug-2-merged-300 1 8.66250 base_Meta-Llama-3-8B-Instruct 1 8.53125 llama3-8b-smaug-2-merged-450 1 8.42500 ########## Second turn ########## score model turn llama3-8b-smaug-2-merged-450 2 7.8125 llama3-8b-smaug-2-merged-300 2 7.7375 llama3-8b-smaug-2-merged-600 2 7.7250 llama3-8b-smaug-2-merged-150 2 7.7125 base_Meta-Llama-3-8B-Instruct 2 7.5500 ########## Average ########## score model llama3-8b-smaug-2-merged-600 8.259375 llama3-8b-smaug-2-merged-150 8.212500 llama3-8b-smaug-2-merged-300 8.200000 llama3-8b-smaug-2-merged-450 8.118750 base_Meta-Llama-3-8B-Instruct 8.040625 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | llama3-8b-smaug-2-merged-600 | **8.79** | 7.73 | **8.26** | | llama3-8b-smaug-2-merged-450 | 8.43 | **7.81** | 8.12 | | llama3-8b-smaug-2-merged-300 | 8.66 | 7.74 | 8.20 | | llama3-8b-smaug-2-merged-150 | 8.71 | 7.71 | 8.21 | | Meta-Llama-3-8B-Instruct | 8.53 | 7.55 | 8.04 |
{"license": "llama2", "library_name": "transformers"}
LoneStriker/Llama-3-Smaug-8B-4.0bpw-h6-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-20T07:30:19+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Llama-3-Smaug-8B ================ ### Built with Meta Llama 3 !image/png This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to meta-llama/Meta-Llama-3-8B. ### Model Description * Developed by: Abacus.AI * License: URL * Finetuned from model: meta-llama/Meta-Llama-3-8B. Evaluation ----------
[ "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
text-generation
transformers
# Uploaded model - **Developed by:** jackdawboy - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
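For completeness, a minimal Unsloth loading sketch for this checkpoint; the sequence length and 4-bit loading are illustrative choices, not values stated in the card:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jackdawboy/llama3-8b-Chinese-ft",
    max_seq_length=2048,   # illustrative; not specified in the card
    load_in_4bit=True,     # illustrative; matches the 4-bit base it was tuned from
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("你好,请介绍一下你自己。", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```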
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
jackdawboy/llama3-8b-Chinese-ft
null
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:31:23+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: jackdawboy - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: jackdawboy\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: jackdawboy\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Llama-3-Smaug-8B ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/OrcJyTaUtD2HxJOPPwNva.png) This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ## Evaluation ``` ########## First turn ########## score model turn llama3-8b-smaug-2-merged-600 1 8.79375 llama3-8b-smaug-2-merged-150 1 8.71250 llama3-8b-smaug-2-merged-300 1 8.66250 base_Meta-Llama-3-8B-Instruct 1 8.53125 llama3-8b-smaug-2-merged-450 1 8.42500 ########## Second turn ########## score model turn llama3-8b-smaug-2-merged-450 2 7.8125 llama3-8b-smaug-2-merged-300 2 7.7375 llama3-8b-smaug-2-merged-600 2 7.7250 llama3-8b-smaug-2-merged-150 2 7.7125 base_Meta-Llama-3-8B-Instruct 2 7.5500 ########## Average ########## score model llama3-8b-smaug-2-merged-600 8.259375 llama3-8b-smaug-2-merged-150 8.212500 llama3-8b-smaug-2-merged-300 8.200000 llama3-8b-smaug-2-merged-450 8.118750 base_Meta-Llama-3-8B-Instruct 8.040625 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | llama3-8b-smaug-2-merged-600 | **8.79** | 7.73 | **8.26** | | llama3-8b-smaug-2-merged-450 | 8.43 | **7.81** | 8.12 | | llama3-8b-smaug-2-merged-300 | 8.66 | 7.74 | 8.20 | | llama3-8b-smaug-2-merged-150 | 8.71 | 7.71 | 8.21 | | Meta-Llama-3-8B-Instruct | 8.53 | 7.55 | 8.04 |
{"license": "llama2", "library_name": "transformers"}
LoneStriker/Llama-3-Smaug-8B-5.0bpw-h6-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "region:us" ]
null
2024-04-20T07:32:29+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
Llama-3-Smaug-8B ================ ### Built with Meta Llama 3 !image/png This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to meta-llama/Meta-Llama-3-8B. ### Model Description * Developed by: Abacus.AI * License: URL * Finetuned from model: meta-llama/Meta-Llama-3-8B. Evaluation ----------
[ "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n", "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
text-generation
transformers
# Llama-3-Smaug-8B ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/OrcJyTaUtD2HxJOPPwNva.png) This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ## Evaluation ``` ########## First turn ########## score model turn llama3-8b-smaug-2-merged-600 1 8.79375 llama3-8b-smaug-2-merged-150 1 8.71250 llama3-8b-smaug-2-merged-300 1 8.66250 base_Meta-Llama-3-8B-Instruct 1 8.53125 llama3-8b-smaug-2-merged-450 1 8.42500 ########## Second turn ########## score model turn llama3-8b-smaug-2-merged-450 2 7.8125 llama3-8b-smaug-2-merged-300 2 7.7375 llama3-8b-smaug-2-merged-600 2 7.7250 llama3-8b-smaug-2-merged-150 2 7.7125 base_Meta-Llama-3-8B-Instruct 2 7.5500 ########## Average ########## score model llama3-8b-smaug-2-merged-600 8.259375 llama3-8b-smaug-2-merged-150 8.212500 llama3-8b-smaug-2-merged-300 8.200000 llama3-8b-smaug-2-merged-450 8.118750 base_Meta-Llama-3-8B-Instruct 8.040625 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | llama3-8b-smaug-2-merged-600 | **8.79** | 7.73 | **8.26** | | llama3-8b-smaug-2-merged-450 | 8.43 | **7.81** | 8.12 | | llama3-8b-smaug-2-merged-300 | 8.66 | 7.74 | 8.20 | | llama3-8b-smaug-2-merged-150 | 8.71 | 7.71 | 8.21 | | Meta-Llama-3-8B-Instruct | 8.53 | 7.55 | 8.04 |
{"license": "llama2", "library_name": "transformers"}
LoneStriker/Llama-3-Smaug-8B-6.0bpw-h6-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "region:us" ]
null
2024-04-20T07:34:59+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
Llama-3-Smaug-8B ================ ### Built with Meta Llama 3 !image/png This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to meta-llama/Meta-Llama-3-8B. ### Model Description * Developed by: Abacus.AI * License: URL * Finetuned from model: meta-llama/Meta-Llama-3-8B. Evaluation ----------
[ "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n", "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
text-generation
transformers
<img src="https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png">

MoE'd up:
- [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b)
- [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)

These were the two most interesting Llama 3 finetunes so far. The resulting model seems OK. It's not on Miqu's level, anyway.

Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw4.4
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T07:36:52+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL">

MoE'd up:
- dreamgen/opus-v1.2-llama-3-8b
- NousResearch/Meta-Llama-3-8B-Instruct

These were the two most interesting Llama 3 finetunes so far. The resulting model seems OK. It's not on Miqu's level, anyway.

Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
## What is this?

This is an alternative configuration of some special tokens for the Opus Llama 3 models.
This is needed for some backends because of the issues described here:

- https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b/discussions/3
- https://github.com/ggerganov/llama.cpp/issues/6770

## What are the changes?

First, DreamGen Opus models use a variant of ChatML, so we rename `<|start_header_id|>` to `<|im_start|>` and `<|eot_id|>` to `<|im_end|>`. This is already done in the DreamGen Opus Llama 3 fp16 repos.

Then, in order to address the issues some backends are having with Llama 3's special tokens:

- We set `"special": false` for both `<|im_start|>` and `<|im_end|>` in various places. This will allow them to be rendered by some frontends. Originally discovered [here](https://github.com/ggerganov/llama.cpp/pull/6745#issuecomment-2066914808).
- We set the EOS token to `<|im_end|>` in various places.

I consider using `<|im_end|>` as a stop token suboptimal, since in multi-character scenarios I like to let the model generate multiple character messages at once, and this prevents that. But for now, until we can get custom stop strings with special tokens working, this is the best we have.
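The card above describes the change but does not ship a script. The following is a minimal sketch of how such an edit could be applied to a local copy of a model repo, assuming the standard Hugging Face `tokenizer_config.json` layout; the `repo_dir` path is a placeholder.

```python
import json

repo_dir = "opus-v1.2-llama-3-8b"  # hypothetical local checkout of the model repo
cfg_path = f"{repo_dir}/tokenizer_config.json"

with open(cfg_path) as f:
    cfg = json.load(f)

# Mark the ChatML delimiters as non-special so some frontends will render them.
for tok in cfg.get("added_tokens_decoder", {}).values():
    if tok.get("content") in ("<|im_start|>", "<|im_end|>"):
        tok["special"] = False

# Use <|im_end|> as the EOS token, as described above.
cfg["eos_token"] = "<|im_end|>"

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```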
{}
dreamgen/opus-llama-3-tokens-alt
null
[ "transformers", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T07:37:03+00:00
[]
[]
TAGS #transformers #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## What is this?

This is an alternative configuration of some special tokens for the Opus Llama 3 models.
This is needed for some backends because of the issues described here:

- URL
- URL

## What are the changes?

First, DreamGen Opus models use a variant of ChatML, so we rename '<|start_header_id|>' to '<|im_start|>' and '<|eot_id|>' to '<|im_end|>'. This is already done in the DreamGen Opus Llama 3 fp16 repos.

Then, in order to address the issues some backends are having with Llama 3's special tokens:

- We set '"special": false' for both '<|im_start|>' and '<|im_end|>' in various places. This will allow them to be rendered by some frontends. Originally discovered here.
- We set the EOS token to '<|im_end|>' in various places.

I consider using '<|im_end|>' as a stop token suboptimal, since in multi-character scenarios I like to let the model generate multiple character messages at once, and this prevents that. But for now, until we can get custom stop strings with special tokens working, this is the best we have.
[ "## What is this?\n\nThis is alternative configuration for some special tokens for Opus Llama 3 models.\nThis is needed for some backends because of the issues described here:\n\n- URL\n- URL", "## What are the changes?\n\nFirst, DreamGen Opus models use a variant of ChatML, so we rename '<|start_header_id|>' to '<|im_start|>' and '<|eot_id|>' to '<|im_end|>'. This is already done in the DreamGen Opus Llama 3 fp16 repos.\n\nThen, in order to address the issues some backends are having with Llama 3's special tokens:\n\n- We set '\"special\": false' for both '<|im_start|>' and '<|im_end|>' in various places. This will allow them to be rendered by some frontends. Originally discovered here.\n- We set the EOS token to '<|im_end|>' in various places.\n\nI consider using '<|im_end|>' as a stop token as suboptimal, as in multi-character scenarios I like to let he model generate multiple character messages at once, and this prevents that. But for now, until we can get custom stop strings with special tokens working, this is the best we have." ]
[ "TAGS\n#transformers #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## What is this?\n\nThis is alternative configuration for some special tokens for Opus Llama 3 models.\nThis is needed for some backends because of the issues described here:\n\n- URL\n- URL", "## What are the changes?\n\nFirst, DreamGen Opus models use a variant of ChatML, so we rename '<|start_header_id|>' to '<|im_start|>' and '<|eot_id|>' to '<|im_end|>'. This is already done in the DreamGen Opus Llama 3 fp16 repos.\n\nThen, in order to address the issues some backends are having with Llama 3's special tokens:\n\n- We set '\"special\": false' for both '<|im_start|>' and '<|im_end|>' in various places. This will allow them to be rendered by some frontends. Originally discovered here.\n- We set the EOS token to '<|im_end|>' in various places.\n\nI consider using '<|im_end|>' as a stop token as suboptimal, as in multi-character scenarios I like to let he model generate multiple character messages at once, and this prevents that. But for now, until we can get custom stop strings with special tokens working, this is the best we have." ]
text-generation
transformers
# Llama-3-Smaug-8B ### Built with Meta Llama 3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/OrcJyTaUtD2HxJOPPwNva.png) This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ### Model Description - **Developed by:** [Abacus.AI](https://abacus.ai) - **License:** https://llama.meta.com/llama3/license/ - **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). ## Evaluation ``` ########## First turn ########## score model turn llama3-8b-smaug-2-merged-600 1 8.79375 llama3-8b-smaug-2-merged-150 1 8.71250 llama3-8b-smaug-2-merged-300 1 8.66250 base_Meta-Llama-3-8B-Instruct 1 8.53125 llama3-8b-smaug-2-merged-450 1 8.42500 ########## Second turn ########## score model turn llama3-8b-smaug-2-merged-450 2 7.8125 llama3-8b-smaug-2-merged-300 2 7.7375 llama3-8b-smaug-2-merged-600 2 7.7250 llama3-8b-smaug-2-merged-150 2 7.7125 base_Meta-Llama-3-8B-Instruct 2 7.5500 ########## Average ########## score model llama3-8b-smaug-2-merged-600 8.259375 llama3-8b-smaug-2-merged-150 8.212500 llama3-8b-smaug-2-merged-300 8.200000 llama3-8b-smaug-2-merged-450 8.118750 base_Meta-Llama-3-8B-Instruct 8.040625 ``` | Model | First turn | Second Turn | Average | | :---- | ---------: | ----------: | ------: | | llama3-8b-smaug-2-merged-600 | **8.79** | 7.73 | **8.26** | | llama3-8b-smaug-2-merged-450 | 8.43 | **7.81** | 8.12 | | llama3-8b-smaug-2-merged-300 | 8.66 | 7.74 | 8.20 | | llama3-8b-smaug-2-merged-150 | 8.71 | 7.71 | 8.21 | | Meta-Llama-3-8B-Instruct | 8.53 | 7.55 | 8.04 |
{"license": "llama2", "library_name": "transformers"}
LoneStriker/Llama-3-Smaug-8B-8.0bpw-h8-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-20T07:37:49+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Llama-3-Smaug-8B ================ ### Built with Meta Llama 3 !image/png This model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to meta-llama/Meta-Llama-3-8B. ### Model Description * Developed by: Abacus.AI * License: URL * Finetuned from model: meta-llama/Meta-Llama-3-8B. Evaluation ----------
[ "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### Built with Meta Llama 3\n\n\n!image/png\n\n\nThis model was built using the Smaug recipe for improving performance on real world multi-turn conversations applied to\nmeta-llama/Meta-Llama-3-8B.", "### Model Description\n\n\n* Developed by: Abacus.AI\n* License: URL\n* Finetuned from model: meta-llama/Meta-Llama-3-8B.\n\n\nEvaluation\n----------" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
AJosh/G-22-1
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:39:40+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
AL-Sayed/PHY-AI1
null
[ "transformers", "safetensors", "gemma", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:40:51+00:00
[]
[]
TAGS #transformers #safetensors #gemma #text-generation #autotrain #text-generation-inference #peft #conversational #license-other #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #autotrain #text-generation-inference #peft #conversational #license-other #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
null
transformers
# Uploaded model - **Developed by:** dattaraj - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
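For readers who want to try the model, here is a minimal inference sketch, not part of the original card; it assumes the repo contains full merged weights rather than only a LoRA adapter, and the prompt and generation settings are arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dattaraj/llama3-8b-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Arbitrary example prompt.
inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```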
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
dattaraj/llama3-8b-finetuned
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:42:09+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: dattaraj - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: dattaraj\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: dattaraj\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-classification
transformers
This reward function can be used for RLHF, including PPO, iterative SFT, and iterative DPO.

The license is derived from `PKU-Alignment/PKU-SafeRLHF-30K`.

## Training
The base model is `meta-llama/Meta-Llama-3-8B-Instruct`. We use the training script at `https://github.com/WeiXiongUST/RLHF-Reward-Modeling`.

## Uses
```python
import torch
from transformers import AutoTokenizer, pipeline

rm_tokenizer = AutoTokenizer.from_pretrained("sfairXC/FsfairX-LLaMA3-RM-v0.1")
device = 0 # accelerator.device
rm_pipe = pipeline(
    "sentiment-analysis",
    model="sfairXC/FsfairX-LLaMA3-RM-v0.1",
    #device="auto",
    device=device,
    tokenizer=rm_tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16}
)

pipe_kwargs = {
    "return_all_scores": True,
    "function_to_apply": "none",
    "batch_size": 1
}

chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

test_texts = [rm_tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False).replace(rm_tokenizer.bos_token, "")]
pipe_outputs = rm_pipe(test_texts, **pipe_kwargs)
rewards = [output[0]["score"] for output in pipe_outputs]
```

## Results
This reward model is the SOTA open-source RM on RewardBench (as of Apr 20, 2024).

| Metric     | Score  |
|------------|--------|
| Chat       | 99.44  |
| Chat Hard  | 65.13  |
| Safety     | 88.76  |
| Reasoning  | 88.30  |

## References
This repo was developed as part of work on iterative rejection-sampling fine-tuning and iterative DPO. If you find the content of this repo useful in your work, please consider citing it as follows:

```bibtex
@article{dong2023raft,
  title={Raft: Reward ranked finetuning for generative foundation model alignment},
  author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong},
  journal={arXiv preprint arXiv:2304.06767},
  year={2023}
}

@misc{xiong2024iterative,
  title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
  author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
  year={2024},
  eprint={2312.11456},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
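As a follow-on illustration that is not part of the original card, the scalar scores returned above can be used directly for best-of-n sampling: score each candidate assistant reply and keep the highest-scoring one. This sketch reuses `rm_pipe`, `rm_tokenizer`, and `pipe_kwargs` from the snippet above; the candidate replies are made up.

```python
candidates = [
    "I'm doing great. How can I help you today?",
    "Fine.",
]

scored = []
for reply in candidates:
    chat = [
        {"role": "user", "content": "Hello, how are you?"},
        {"role": "assistant", "content": reply},
    ]
    text = rm_tokenizer.apply_chat_template(
        chat, tokenize=False, add_generation_prompt=False
    ).replace(rm_tokenizer.bos_token, "")
    # return_all_scores=True yields a list of {label, score} dicts per input.
    score = rm_pipe([text], **pipe_kwargs)[0][0]["score"]
    scored.append((score, reply))

best_score, best_reply = max(scored)
print(best_score, best_reply)
```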
{"license": "cc-by-nc-4.0"}
sfairXC/FsfairX-LLaMA3-RM-v0.1
null
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:2312.11456", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T07:42:52+00:00
[ "2312.11456" ]
[]
TAGS #transformers #safetensors #llama #text-classification #arxiv-2312.11456 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This reward function can be used for RLHF, including PPO, iterative SFT, and iterative DPO. The license is derived from 'PKU-Alignment/PKU-SafeRLHF-30K'. Training -------- The base model is 'meta-llama/Meta-Llama-3-8B-Instruct'. We use the training script at 'URL'. Uses ---- Results ------- This reward model is the SOTA open-source RM on RewardBench (as of Apr 20, 2024). References ---------- This repo was developed as part of work on iterative rejection-sampling fine-tuning and iterative DPO. If you find the content of this repo useful in your work, please consider citing it as follows:
[]
[ "TAGS\n#transformers #safetensors #llama #text-classification #arxiv-2312.11456 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
transformers
# Uploaded model - **Developed by:** catastropiyush - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
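Since this repo ships GGUF files, one way to run it locally is llama-cpp-python. The sketch below is not from the original card: the file name is a hypothetical quant (check the repo's file listing for the actual names), and the Alpaca-style prompt is an assumption based on the repo name.

```python
from llama_cpp import Llama

# Hypothetical file name; download the actual .gguf file from the repo first.
llm = Llama(model_path="Alpaca_Mistral_finetune.Q4_K_M.gguf", n_ctx=2048)

prompt = "### Instruction:\nSummarize what LoRA fine-tuning does.\n\n### Response:\n"
out = llm(prompt, max_tokens=64, stop=["###"])
print(out["choices"][0]["text"])
```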
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
catastropiyush/Alpaca_Mistral_finetune_GGUF
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:47:07+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: catastropiyush - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: catastropiyush\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: catastropiyush\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-airline-sentiment-analysis This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1600 - F1: 81.2771 - Gen Len: 2.8311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
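A quick illustrative inference sketch, not part of the original card; the card does not document the expected input format, so the instruction-style prompt below is an assumption.

```python
from transformers import pipeline

clf = pipeline(
    "text2text-generation",
    model="sudhanshusinghaiml/flan-t5-base-airline-sentiment-analysis",
)
# Hypothetical prompt format; adjust to match how the model was fine-tuned.
result = clf("Classify the sentiment of this tweet: The flight was delayed for three hours.")
print(result[0]["generated_text"])
```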
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flan-t5-base-airline-sentiment-analysis", "results": []}]}
sudhanshusinghaiml/flan-t5-base-airline-sentiment-analysis
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T07:49:35+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# flan-t5-base-airline-sentiment-analysis This model is a fine-tuned version of google/flan-t5-base on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1600 - F1: 81.2771 - Gen Len: 2.8311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
[ "# flan-t5-base-airline-sentiment-analysis\n\nThis model is a fine-tuned version of google/flan-t5-base on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1600\n- F1: 81.2771\n- Gen Len: 2.8311", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google/flan-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# flan-t5-base-airline-sentiment-analysis\n\nThis model is a fine-tuned version of google/flan-t5-base on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1600\n- F1: 81.2771\n- Gen Len: 2.8311", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v2-xlarge-otat-recommened-hp This model is a fine-tuned version of [microsoft/deberta-v2-xlarge](https://huggingface.co/microsoft/deberta-v2-xlarge) on the DandinPower/review_onlytitleandtext dataset. It achieves the following results on the evaluation set: - Loss: 0.7741 - Accuracy: 0.6777 - Macro F1: 0.6756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 0.7904 | 1.14 | 500 | 0.8056 | 0.6661 | 0.6641 | | 0.7232 | 2.29 | 1000 | 0.7701 | 0.6783 | 0.6757 | | 0.6944 | 3.43 | 1500 | 0.7669 | 0.681 | 0.6802 | | 0.6795 | 4.57 | 2000 | 0.7741 | 0.6777 | 0.6756 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
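An illustrative usage sketch, not part of the original card; the label names come from the repo's config, which this card does not document.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="DandinPower/deberta-v2-xlarge-otat-recommened-hp",
)
print(clf("Great product, arrived on time and works exactly as described."))
```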
{"language": ["en"], "license": "mit", "tags": ["nycu-112-2-datamining-hw2", "generated_from_trainer"], "datasets": ["DandinPower/review_onlytitleandtext"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v2-xlarge", "model-index": [{"name": "deberta-v2-xlarge-otat-recommened-hp", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "DandinPower/review_onlytitleandtext", "type": "DandinPower/review_onlytitleandtext"}, "metrics": [{"type": "accuracy", "value": 0.6777142857142857, "name": "Accuracy"}]}]}]}
DandinPower/deberta-v2-xlarge-otat-recommened-hp
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "nycu-112-2-datamining-hw2", "generated_from_trainer", "en", "dataset:DandinPower/review_onlytitleandtext", "base_model:microsoft/deberta-v2-xlarge", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:50:22+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #deberta-v2 #text-classification #nycu-112-2-datamining-hw2 #generated_from_trainer #en #dataset-DandinPower/review_onlytitleandtext #base_model-microsoft/deberta-v2-xlarge #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
deberta-v2-xlarge-otat-recommened-hp ==================================== This model is a fine-tuned version of microsoft/deberta-v2-xlarge on the DandinPower/review\_onlytitleandtext dataset. It achieves the following results on the evaluation set: * Loss: 0.7741 * Accuracy: 0.6777 * Macro F1: 0.6756 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-06 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #deberta-v2 #text-classification #nycu-112-2-datamining-hw2 #generated_from_trainer #en #dataset-DandinPower/review_onlytitleandtext #base_model-microsoft/deberta-v2-xlarge #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-06\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v7_trained_weigths This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0894 | 1.0 | 1622 | 1.4020 | ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.15.2
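Since this repo holds a PEFT adapter rather than full weights, loading it means attaching it to the base model. A minimal sketch follows (not from the original card), assuming gated access to `meta-llama/Llama-2-7b-hf`.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "AmirlyPhd/v7_trained_weigths")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```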
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "v7_trained_weigths", "results": []}]}
AmirlyPhd/v7_trained_weigths
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-04-20T07:51:44+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #region-us
v7\_trained\_weigths ==================== This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.4020 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.03 * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.7.2.dev0 * Transformers 4.36.2 * Pytorch 2.1.2 * Datasets 2.16.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.2.dev0\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.2.dev0\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.16.1\n* Tokenizers 0.15.2" ]
image-classification
transformers
Returns the traffic sign class for a given image. See https://www.kaggle.com/code/dima806/traffic-sign-detection-vit for more details.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/uhZDh0zuqG4xZYMGwfTEN.png)

```
Classification report:

                              precision    recall  f1-score   support

           Bicycles crossing     1.0000    0.9660    0.9827       206
           Children crossing     0.8583    1.0000    0.9238       206
                Danger Ahead     0.9810    1.0000    0.9904       206
 Dangerous curve to the left     0.7981    0.8293    0.8134       205
Dangerous curve to the right     0.8182    0.7902    0.8040       205
                Dont Go Left     1.0000    0.9903    0.9951       206
       Dont Go Left or Right     1.0000    1.0000    1.0000       206
               Dont Go Right     1.0000    0.9610    0.9801       205
            Dont Go straight     1.0000    1.0000    1.0000       205
    Dont Go straight or left     0.9493    1.0000    0.9740       206
     Dont overtake from Left     0.9533    0.9903    0.9714       206
                      Fences     0.9762    1.0000    0.9880       205
                     Go Left     0.9844    0.9175    0.9497       206
            Go Left or right     0.8723    1.0000    0.9318       205
                    Go Right     1.0000    0.9854    0.9926       205
         Go left or straight     0.7946    0.8683    0.8298       205
        Go right or straight     0.8920    0.7621    0.8220       206
                 Go straight     0.9624    0.8689    0.9133       206
        Go straight or right     1.0000    0.8010    0.8895       206
     Heavy Vehicle Accidents     0.9579    1.0000    0.9785       205
                        Horn     1.0000    1.0000    1.0000       206
                      No Car     1.0000    1.0000    1.0000       205
                    No Uturn     0.9856    1.0000    0.9928       206
                    No entry     1.0000    1.0000    1.0000       205
                     No horn     1.0000    1.0000    1.0000       205
                 No stopping     0.9856    1.0000    0.9927       205
                Road Divider     1.0000    1.0000    1.0000       206
        Roundabout mandatory     0.9951    1.0000    0.9976       205
        Speed limit (15km/h)     1.0000    1.0000    1.0000       206
        Speed limit (30km/h)     0.9619    0.9806    0.9712       206
        Speed limit (40km/h)     0.9800    0.9515    0.9655       206
        Speed limit (50km/h)     0.9757    0.9757    0.9757       206
         Speed limit (5km/h)     1.0000    0.9951    0.9976       206
        Speed limit (60km/h)     1.0000    0.4126    0.5842       206
        Speed limit (70km/h)     1.0000    0.9466    0.9726       206
              Train Crossing     0.9671    1.0000    0.9833       206
          Under Construction     1.0000    0.9806    0.9902       206
                     Unknown     1.0000    0.5415    0.7025       205
                       Uturn     1.0000    1.0000    1.0000       205
              Zebra Crossing     0.9206    0.9563    0.9381       206
                ZigZag Curve     0.8047    1.0000    0.8918       206
                   keep Left     0.7895    0.8010    0.7952       206
                  keep Right     0.8565    0.9902    0.9186       205
        speed limit (80km/h)     0.6042    0.9854    0.7491       206
          watch out for cars     1.0000    1.0000    1.0000       205

                    accuracy                         0.9388      9252
                   macro avg     0.9472    0.9388    0.9366      9252
                weighted avg     0.9472    0.9388    0.9366      9252
```
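An illustrative inference sketch, not part of the original card; the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="dima806/traffic_sign_detection")
# Placeholder path; any RGB image of a traffic sign works here.
print(classifier("example_sign.jpg")[0])
```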
{"license": "apache-2.0", "metrics": ["accuracy", "f1"]}
dima806/traffic_sign_detection
null
[ "transformers", "safetensors", "vit", "image-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:53:19+00:00
[]
[]
TAGS #transformers #safetensors #vit #image-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Returns the traffic sign class for a given image. See URL for more details.

!image/png
[]
[ "TAGS\n#transformers #safetensors #vit #image-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]